Use a language model
Large language models understand and generate natural language. They power chatbots, search engines, writing aids, and more.
Use these for:
- Conversational AI: Chat and engage in natural dialogue. Get an AI assistant.
- Question answering: Provide informative answers to questions. Build a knowledge base.
- Text generation: Generate fluent continuations of text. Autocomplete your writing.
- Summarization: Summarize long passages of text. Get key points quickly.
- Translation: Translate between languages. Communicate across language barriers.
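Hosted models like the ones in this collection are typically called over HTTP by posting a JSON payload of inputs. Below is a minimal sketch, modeled on Replicate's prediction endpoint; the input names (`prompt`, `max_tokens`, `temperature`) follow common text-generation conventions, but each model page documents its own authoritative input schema.

```python
import json
import os
import urllib.request

# Endpoint modeled on Replicate's "create a prediction for a model" API.
API_URL = "https://api.replicate.com/v1/models/meta/meta-llama-3-8b-instruct/predictions"

def build_payload(prompt: str, max_tokens: int = 512, temperature: float = 0.7) -> dict:
    """Assemble the JSON body for a prediction request.

    The input names here are the common text-generation conventions;
    check the model's schema for the exact supported parameters.
    """
    return {"input": {"prompt": prompt,
                      "max_tokens": max_tokens,
                      "temperature": temperature}}

def create_prediction(prompt: str) -> dict:
    """Send the request. Requires a REPLICATE_API_TOKEN environment variable."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Build (but don't send) a request body for a summarization task.
payload = build_payload("Summarize the plot of Hamlet in two sentences.")
```

The same payload shape works for any of the use cases above; only the prompt changes.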
Language models keep getting bigger and better at these tasks. The largest models today exhibit impressive reasoning skills. But you can get great results from smaller, faster, cheaper models too.
Our Pick: Meta Llama 3 8B Instruct
Meta’s Llama 3 8B Instruct is the clear choice for most applications. With 8B parameters, an 8K context window, and instruction tuning on top of 15T+ tokens of pretraining, it achieves state-of-the-art performance for its size across a wide range of tasks. It’s fast, affordable, and flexible.
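Instruct-tuned checkpoints like this one expect a chat template around your messages. The sketch below follows Meta's published Llama 3 prompt format; hosted versions usually apply this template for you, so it matters mainly if you pass raw prompts or serve the base model yourself.

```python
def format_llama3_chat(system: str, user: str) -> str:
    """Wrap a system + user message in Llama 3's chat template.

    The special tokens (<|begin_of_text|>, <|start_header_id|>,
    <|end_header_id|>, <|eot_id|>) follow Meta's published prompt
    format; the trailing assistant header cues the model to reply.
    """
    return (
        "<|begin_of_text|>"
        f"<|start_header_id|>system<|end_header_id|>\n\n{system}<|eot_id|>"
        f"<|start_header_id|>user<|end_header_id|>\n\n{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = format_llama3_chat(
    "You are a concise assistant.",
    "What is an 8K context window?",
)
```

Llama 2's chat models use a different template (`[INST]`/`<<SYS>>` markers), so the template goes with the model family, not with the provider.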
Upgrade Pick: Meta Llama 3 70B Instruct
For the most demanding applications, Llama 3 70B Instruct is the top performer. Its 70 billion parameters and 15T+ token training corpus deliver greater accuracy and nuance across complex language tasks.
The 70B model shares the same efficiency benefits and safety features as the 8B version. But with greater capacity, it excels at applications like content creation, conversational AI, and code generation.
Budget Pick: Flan-T5 XL
For latency-sensitive, cost-constrained applications, Flan-T5 XL remains a strong choice. While it can’t match Llama 3’s overall performance, its lean 3B parameter size makes it fast and economical for focused tasks.
If speed and cost are critical and your use case is well-defined, like classification or summarization, Flan-T5 XL delivers reliable results quickly and affordably.
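Flan-T5 was instruction-tuned on natural-language task descriptions, so focused tasks like classification are usually phrased as plain instructions with the allowed answers spelled out. A small sketch of building such a prompt; the exact wording is illustrative, not a required format.

```python
def classification_prompt(text: str, labels: list[str]) -> str:
    """Phrase a classification task as a Flan-T5-style instruction.

    Flan-T5 responds to plain-language task descriptions; listing the
    allowed labels in the instruction constrains the model's answer.
    The phrasing here is one reasonable choice, not a fixed schema.
    """
    options = ", ".join(labels)
    return (
        f"Classify the following review as one of: {options}.\n\n"
        f"Review: {text}\n\n"
        "Answer:"
    )

p = classification_prompt("The battery died after a week.",
                          ["positive", "negative"])
```

The same pattern (instruction, input, answer cue) adapts directly to summarization ("Summarize the following article: …") and other well-defined tasks.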
Recommended models
meta / meta-llama-3-8b-instruct
An 8 billion parameter language model from Meta, fine-tuned for chat completions
meta / meta-llama-3-70b-instruct
A 70 billion parameter language model from Meta, fine-tuned for chat completions
meta / meta-llama-3-8b
Base version of Llama 3, an 8 billion parameter language model from Meta.
meta / llama-2-7b-chat
A 7 billion parameter language model from Meta, fine-tuned for chat completions
meta / llama-2-70b-chat
A 70 billion parameter language model from Meta, fine-tuned for chat completions
meta / llama-2-13b-chat
A 13 billion parameter language model from Meta, fine-tuned for chat completions
mistralai / mistral-7b-v0.1
A 7 billion parameter language model from Mistral.
meta / meta-llama-3-70b
Base version of Llama 3, a 70 billion parameter language model from Meta.
01-ai / yi-34b-chat
The Yi series models are large language models trained from scratch by developers at 01.AI.
01-ai / yi-6b
The Yi series models are large language models trained from scratch by developers at 01.AI.
replicate / flan-t5-xl
A language model by Google for tasks like classification, summarization, and more
stability-ai / stablelm-tuned-alpha-7b
7 billion parameter version of Stability AI's language model
replicate / llama-7b
Transformers implementation of the LLaMA language model
google-deepmind / gemma-2b-it
2B instruct version of Google’s Gemma model
google-deepmind / gemma-7b-it
7B instruct version of Google’s Gemma model
nateraw / nous-hermes-2-solar-10.7b
Nous Hermes 2 - SOLAR 10.7B is the flagship Nous Research model, built on the SOLAR 10.7B base model.
kcaverly / nous-hermes-2-yi-34b-gguf
Nous Hermes 2 - Yi-34B is a state-of-the-art Yi fine-tune, trained on GPT-4-generated synthetic data
replicate / gpt-j-6b
A large language model by EleutherAI
google-deepmind / gemma-7b
7B base version of Google’s Gemma model
nateraw / nous-hermes-llama2-awq
TheBloke/Nous-Hermes-Llama2-AWQ served with vLLM
01-ai / yi-6b-chat
The Yi series models are large language models trained from scratch by developers at 01.AI.
lucataco / qwen1.5-72b
Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data
lucataco / phi-2
Phi-2 by Microsoft
lucataco / qwen1.5-7b
Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data
google-deepmind / gemma-2b
2B base version of Google’s Gemma model
lucataco / qwen1.5-14b
Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data
lucataco / phixtral-2x2_8
phixtral-2x2_8 is the first Mixture of Experts (MoE) made with two microsoft/phi-2 models, inspired by the mistralai/Mixtral-8x7B-v0.1 architecture
adirik / mamba-2.8b
Base version of Mamba 2.8B, a 2.8 billion parameter state space language model
adirik / mamba-130m
Base version of Mamba 130M, a 130 million parameter state space language model
adirik / mamba-1.4b
Base version of Mamba 1.4B, a 1.4 billion parameter state space language model
adirik / mamba-2.8b-slimpj
Base version of Mamba 2.8B SlimPajama, a 2.8 billion parameter state space language model
adirik / mamba-370m
Base version of Mamba 370M, a 370 million parameter state space language model
adirik / mamba-790m
Base version of Mamba 790M, a 790 million parameter state space language model