lucataco / qwen1.5-4b

Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data

  • Public
  • 1.4K runs
  • T4
  • GitHub
  • License

Input

  • Prompt (string): the input prompt. Default: "Give me a short introduction to large language model."
  • System prompt (string): the system prompt. Default: "You are a helpful assistant."
  • Max new tokens (integer, minimum: 1, maximum: 32768): the maximum number of tokens to generate. Default: 512
  • Temperature (number, minimum: 0.1, maximum: 5): adjusts the randomness of outputs; values greater than 1 are more random, 0 is deterministic, and 0.75 is a good starting value. Default: 1
  • Top p (number, minimum: 0.01, maximum: 1): when decoding text, samples from the top p fraction of most likely tokens; lower it to ignore less likely tokens. Default: 1
  • Top k (integer): when decoding text, samples from the top k most likely tokens; lower it to ignore less likely tokens. Default: 1
  • Repetition penalty (number, minimum: 0.01, maximum: 10): penalty for repeated words in generated text; 1 is no penalty, values greater than 1 discourage repetition, and values less than 1 encourage it. Default: 1
  • Seed (integer): the seed for the random number generator.
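For programmatic use, the model can be called with the Replicate Python client. A minimal sketch follows; the input key names are assumed from the parameter list above and should be checked against the model's actual schema before use:

import replicate

# Input key names below are inferred from the parameter descriptions
# above; verify them against the model's schema on Replicate.
# Optionally pin a version with "owner/model:versionhash".
output = replicate.run(
    "lucataco/qwen1.5-4b",
    input={
        "prompt": "Give me a short introduction to large language model.",
        "system_prompt": "You are a helpful assistant.",
        "max_new_tokens": 512,
        "temperature": 1,
        "top_p": 1,
        "top_k": 1,
        "repetition_penalty": 1,
    },
)
# The client may return a string or an iterator of string chunks
print(output)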

Output

A large language model is a type of artificial intelligence system that is trained on a large corpus of text data to learn the patterns and relationships between words and phrases. These models are designed to be able to generate human-like responses to a wide range of prompts and questions, and are often used in applications such as chatbots, language translation, and content generation. Large language models are typically trained using deep learning techniques, which involve feeding the model large amounts of text data and adjusting its parameters to optimize its performance on a specific task. Some of the most well-known large language models include GPT-3, BERT, and RoBERTa.

Run time and cost

This model costs approximately $0.00022 to run on Replicate, or 4545 runs per $1, but this varies depending on your inputs. It is also open source and you can run it on your own computer with Docker.

This model runs on Nvidia T4 GPU hardware. Predictions typically complete within 1 second.

Readme

Qwen1.5-4B-Chat-GPTQ-Int8

Introduction

Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previously released Qwen, the improvements include:

  • 6 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, and 72B;
  • Significant performance improvement for chat models in human preference evaluations;
  • Multilingual support for both base and chat models;
  • Stable support of 32K context length for models of all sizes;
  • No need for trust_remote_code.

For more details, please refer to our blog post and GitHub repo.

Model Details

Qwen1.5 is a series of decoder-only language models released in several sizes. For each size, we release both the base language model and the aligned chat model. The series is based on the Transformer architecture with SwiGLU activation, attention QKV bias, grouped query attention (GQA), a mixture of sliding window attention (SWA) and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and code. For the beta version, we have temporarily not included GQA or the mixture of SWA and full attention.
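To illustrate the SwiGLU feed-forward block mentioned above, here is a minimal PyTorch sketch following the gated-MLP layout common to Llama-family models; the layer names are illustrative and not Qwen's exact implementation:

import torch
import torch.nn as nn
import torch.nn.functional as F

class SwiGLU(nn.Module):
    """Gated feed-forward block: down(silu(gate(x)) * up(x))."""

    def __init__(self, hidden_size: int, intermediate_size: int):
        super().__init__()
        self.gate_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
        self.up_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
        self.down_proj = nn.Linear(intermediate_size, hidden_size, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # SiLU-gated projection, then project back to the model width
        return self.down_proj(F.silu(self.gate_proj(x)) * self.up_proj(x))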

Training details

We pretrained the models on a large amount of data, and we post-trained them with both supervised finetuning and direct preference optimization (DPO). However, while DPO improves performance in human preference evaluations, it degrades some benchmark scores; we plan to fix this in the near future.
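For reference, the standard DPO objective (Rafailov et al., 2023) scores a preferred response against a rejected one using log-probability ratios relative to a frozen reference model. A minimal sketch of that loss, not Qwen's exact training recipe:

import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # Log-ratio of policy vs. reference for chosen and rejected responses
    chosen_ratio = policy_chosen_logps - ref_chosen_logps
    rejected_ratio = policy_rejected_logps - ref_rejected_logps
    # Logistic loss that widens the margin between chosen and rejected
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()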

Requirements

The code for Qwen1.5 is included in the latest Hugging Face transformers library. We advise you to install transformers>=4.37.0; otherwise, you might encounter the following error:

KeyError: 'qwen2'
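With a recent transformers installed, inference follows the usual chat-template pattern. A minimal sketch, assuming the Hugging Face checkpoint Qwen/Qwen1.5-4B-Chat-GPTQ-Int8:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen1.5-4B-Chat-GPTQ-Int8"  # assumed checkpoint name
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Give me a short introduction to large language model."},
]
# Render the chat into the model's prompt format
text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

output_ids = model.generate(inputs.input_ids, max_new_tokens=512)
# Strip the prompt tokens before decoding the response
response = tokenizer.decode(
    output_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True
)
print(response)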

Tips

  • If you encounter code switching or other unexpected outputs, we advise using the hyper-parameters provided in generation_config.json, as sketched below.
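To inspect or apply those defaults explicitly, the generation config can be loaded on its own; a short sketch, again assuming the Qwen/Qwen1.5-4B-Chat-GPTQ-Int8 checkpoint:

from transformers import GenerationConfig

# Load the recommended sampling hyper-parameters shipped with the model
gen_config = GenerationConfig.from_pretrained("Qwen/Qwen1.5-4B-Chat-GPTQ-Int8")
print(gen_config)  # e.g. temperature, top_p, top_k, repetition_penalty
# Then pass it at inference time:
# model.generate(..., generation_config=gen_config)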

Citation

If you find our work helpful, feel free to cite us.

@article{qwen,
  title={Qwen Technical Report},
  author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu},
  journal={arXiv preprint arXiv:2309.16609},
  year={2023}
}