lucataco / qwen1.5-0.5b

Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data.

  • Public
  • 62 runs
  • T4
  • GitHub
  • License
  • Prediction

    lucataco/qwen1.5-0.5b:aeb5ca9e257ceb13ce4c781f292ef4cb0e09a12528e8ef1a07f3d98439a6c9d5
    ID: yqrpa5zczjhwsnz243px3qyt7q
    Status: Succeeded
    Source: Web
    Hardware: T4
    Total duration:
    Created:

    Input

    top_k: 1
    top_p: 1
    prompt: Give me a short introduction to large language model.
    temperature: 1
    system_prompt: You are a helpful assistant.
    max_new_tokens: 512
    repetition_penalty: 1

    Output

    A large language model is a type of artificial intelligence system that is designed to generate human-like text based on a large corpus of text data. These models are trained on a large dataset of text, which allows them to learn patterns and relationships in language that are not present in the training data. Once trained, large language models can be used to generate text in a variety of domains, such as natural language processing, machine translation, and text generation. They are often used in a variety of applications, such as chatbots, virtual assistants, and language translation services.
    Generated in


Run this model
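
Below is a minimal sketch of reproducing the example prediction with the Replicate Python client (pip install replicate, with REPLICATE_API_TOKEN set in the environment). The inputs mirror the run shown above; whether the output arrives as one string or as a stream of text chunks depends on this version's output schema, so the join at the end is an assumption.

    import replicate

    # Same inputs as the example prediction above.
    output = replicate.run(
        "lucataco/qwen1.5-0.5b:aeb5ca9e257ceb13ce4c781f292ef4cb0e09a12528e8ef1a07f3d98439a6c9d5",
        input={
            "top_k": 1,
            "top_p": 1,
            "prompt": "Give me a short introduction to large language model.",
            "temperature": 1,
            "system_prompt": "You are a helpful assistant.",
            "max_new_tokens": 512,
            "repetition_penalty": 1,
        },
    )

    # Language models on Replicate commonly return output as a list or
    # iterator of text chunks; joining also handles a plain string result.
    print(output if isinstance(output, str) else "".join(output))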