lucataco / qwen1.5-110b

Qwen1.5 is the beta version of Qwen2, a transformer-based, decoder-only language model pretrained on a large amount of data.

  • Public
  • 2.7K runs
  • 4x A100 (80GB)
  • GitHub
  • License
Run with an API
  • Prediction

    lucataco/qwen1.5-110b:af7953cb4fe4948df44a074d4785c2f74d0096257197198e90c9ac84361b6aa9
    ID: nytm2kwjsdrgp0cf3jrsy25yaw
    Status: Succeeded
    Source: Web
    Hardware: 4x A100 (80GB)

    Input

    top_k: 50
    top_p: 0.8
    prompt: Provide a short introduction to large language models
    temperature: 0.7
    system_prompt: You are a helpful assistant.
    max_new_tokens: 256
    repetition_penalty: 1.05
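    The input above can be reproduced with the Replicate Python client. This is a minimal sketch, assuming the `replicate` package is installed and a `REPLICATE_API_TOKEN` environment variable is set; the model version hash is the one shown on this page.

    ```python
    # Minimal sketch of reproducing the prediction above via the Replicate
    # Python client. Assumes `pip install replicate` and that
    # REPLICATE_API_TOKEN is set in the environment.

    MODEL = (
        "lucataco/qwen1.5-110b:"
        "af7953cb4fe4948df44a074d4785c2f74d0096257197198e90c9ac84361b6aa9"
    )

    # Input payload matching the prediction shown on this page.
    payload = {
        "top_k": 50,
        "top_p": 0.8,
        "prompt": "Provide a short introduction to large language models",
        "temperature": 0.7,
        "system_prompt": "You are a helpful assistant.",
        "max_new_tokens": 256,
        "repetition_penalty": 1.05,
    }

    def run() -> str:
        # Imported lazily so the payload can be inspected without the
        # client installed. `replicate.run` streams generated tokens;
        # joining them yields the full completion string.
        import replicate

        output = replicate.run(MODEL, input=payload)
        return "".join(output)

    if __name__ == "__main__":
        print(run())
    ```

    The same payload works against the raw HTTP predictions endpoint; only the transport differs.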

    Output

    Large language models (LLMs) are artificial intelligence systems that have been trained on massive amounts of text data to generate human-like language output. They are capable of understanding and generating natural language, and can be used for a wide range of tasks such as language translation, summarization, question answering, and even creative writing. LLMs have become increasingly popular in recent years due to their ability to perform complex language tasks with high accuracy and speed. However, they also raise ethical concerns around issues such as bias and privacy.
