Generate text
Large language models understand and generate natural language. They power chatbots, search engines, writing aids, and more.
Use these for:
- Conversational AI: Chat and engage in natural dialogue. Get an AI assistant.
- Question answering: Provide informative answers to questions. Build a knowledge base.
- Text generation: Generate fluent continuations of text. Autocomplete your writing.
- Summarization: Summarize long passages of text. Get key points quickly.
- Translation: Translate between languages. Communicate across language barriers.
Language models keep getting bigger and better at these tasks. The largest models today exhibit impressive reasoning skills. But you can get great results from smaller, faster, cheaper models too.
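In practice, most of the tasks above come down to choosing a prompt shape and filling it with your text. A minimal sketch of that pattern (the template wording and the `build_prompt` helper are illustrative, not tied to any particular model or API):

```python
# Illustrative prompt templates for common LLM tasks.
# The exact wording is flexible; adjust it to your model and use case.
TASK_TEMPLATES = {
    "qa": "Answer the question concisely.\n\nQuestion: {text}\nAnswer:",
    "summarize": "Summarize the following passage in a few sentences:\n\n{text}",
    "translate": "Translate the following text into {target}:\n\n{text}",
}

def build_prompt(task: str, text: str, **kwargs) -> str:
    """Fill the template for a task; raises KeyError for unknown tasks."""
    return TASK_TEMPLATES[task].format(text=text, **kwargs)

print(build_prompt("summarize", "LLMs are neural networks trained on text."))
```

The same helper works across models; only the template text changes when you switch between a base model and an instruction-tuned one.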
Our Pick: Meta Llama 3 8B Instruct
Meta’s Llama 3 8B Instruct is the clear choice for most applications. With 8B parameters, an 8K-token context window, pretraining on 15T+ tokens, and instruction tuning on top, it achieves state-of-the-art performance on a wide range of tasks. It is fast, affordable, and flexible.
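Instruction-tuned Llama 3 models expect prompts wrapped in Meta’s chat template, using special header and end-of-turn tokens. A sketch of that formatting, assuming the token names from Meta’s published format (most serving stacks apply this template for you, so treat this as a reference, not something you usually write by hand):

```python
def format_llama3_chat(system: str, user: str) -> str:
    """Wrap a system prompt and one user turn in Llama 3's chat template.

    The special tokens below follow Meta's published Llama 3 format;
    verify against your serving stack, which often applies them automatically.
    """
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n" + system + "<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n" + user + "<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = format_llama3_chat(
    "You are a helpful assistant.",
    "Summarize Llama 3 in one line.",
)
print(prompt)
```

The trailing assistant header leaves the model positioned to generate its reply; generation stops when the model emits its own end-of-turn token.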
Upgrade Pick: Meta Llama 3 70B Instruct
For the most demanding applications, Llama 3 70B Instruct is the top performer. Its massive 70B parameters and training on 15T+ tokens deliver unparalleled accuracy and nuance across complex language tasks.
The 70B model shares the same efficiency benefits and safety features as the 8B version. But with greater capacity, it excels at applications like content creation, conversational AI, and code generation.
Budget Pick: Flan-T5 XL
For latency-sensitive, cost-constrained applications, Flan-T5 XL remains a strong choice. While it can’t match Llama 3’s overall performance, its lean 3B parameter size makes it fast and economical for focused tasks.
If speed and cost are critical and your use case is well-defined, like classification or summarization, Flan-T5 XL delivers reliable results quickly and affordably.
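Because Flan-T5 was instruction-tuned on natural-language task descriptions, those well-defined tasks can be phrased as plain instructions in the prompt. A sketch of the prompt shapes (the exact wording is flexible; these mirror common FLAN-style instructions and the `flan_prompt` helper is hypothetical, not part of any library):

```python
def flan_prompt(task: str, text: str, labels=None, target_lang=None) -> str:
    """Build a FLAN-style natural-language instruction for a focused task."""
    if task == "classify":
        options = " or ".join(labels)
        return f"Classify the following text as {options}:\n\n{text}"
    if task == "summarize":
        return f"Summarize the following article:\n\n{text}"
    if task == "translate":
        return f"Translate to {target_lang}: {text}"
    raise ValueError(f"unknown task: {task}")

print(flan_prompt("classify", "I loved this movie!",
                  labels=["positive", "negative"]))
```

Keeping the instruction short and explicit plays to the model’s strengths: Flan-T5 is at its best when the task is fully specified in the prompt rather than left open-ended.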
Featured models

anthropic / claude-3.7-sonnet
The most intelligent Claude model and the first hybrid reasoning model on the market (claude-3-7-sonnet-20250219)

anthropic / claude-3.5-haiku
Anthropic's fastest, most cost-effective model, with a 200K token context window (claude-3-5-haiku-20241022)

deepseek-ai / deepseek-r1
A reasoning model trained with reinforcement learning, on par with OpenAI o1
Recommended models

anthropic / claude-3.5-sonnet
Anthropic's most intelligent language model to date, with a 200K token context window and image understanding (claude-3-5-sonnet-20241022)

meta / meta-llama-3-70b
Base version of Llama 3, a 70 billion parameter language model from Meta.

meta / meta-llama-3-70b-instruct
A 70 billion parameter language model from Meta, fine-tuned for chat completions

meta / meta-llama-3-8b-instruct
An 8 billion parameter language model from Meta, fine-tuned for chat completions

meta / meta-llama-3-8b
Base version of Llama 3, an 8 billion parameter language model from Meta.

google-deepmind / gemma-7b
7B base version of Google’s Gemma model

google-deepmind / gemma-2b
2B base version of Google’s Gemma model

google-deepmind / gemma-7b-it
7B instruct version of Google’s Gemma model

google-deepmind / gemma-2b-it
2B instruct version of Google’s Gemma model

lucataco / phixtral-2x2_8
phixtral-2x2_8 is the first Mixture of Experts (MoE) model made from two microsoft/phi-2 models, inspired by the mistralai/Mixtral-8x7B-v0.1 architecture

lucataco / qwen1.5-72b
Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data

lucataco / qwen1.5-7b
Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data

adirik / mamba-2.8b
Base version of Mamba 2.8B, a 2.8 billion parameter state space language model

adirik / mamba-130m
Base version of Mamba 130M, a 130 million parameter state space language model

adirik / mamba-370m
Base version of Mamba 370M, a 370 million parameter state space language model

adirik / mamba-790m
Base version of Mamba 790M, a 790 million parameter state space language model

adirik / mamba-2.8b-slimpj
Base version of Mamba 2.8B Slim Pyjama, a 2.8 billion parameter state space language model

adirik / mamba-1.4b
Base version of Mamba 1.4B, a 1.4 billion parameter state space language model

lucataco / phi-2
Phi-2 by Microsoft

nateraw / nous-hermes-2-solar-10.7b
Nous Hermes 2 - SOLAR 10.7B is the flagship Nous Research model, built on the SOLAR 10.7B base model.

kcaverly / nous-hermes-2-yi-34b-gguf
Nous Hermes 2 - Yi-34B is a state-of-the-art Yi fine-tune, trained on GPT-4-generated synthetic data

01-ai / yi-34b-chat
The Yi series models are large language models trained from scratch by developers at 01.AI.

01-ai / yi-6b-chat
The Yi series models are large language models trained from scratch by developers at 01.AI.

01-ai / yi-6b
The Yi series models are large language models trained from scratch by developers at 01.AI.

nateraw / nous-hermes-llama2-awq
TheBloke/Nous-Hermes-Llama2-AWQ served with vLLM

stability-ai / stablelm-tuned-alpha-7b
7 billion parameter version of Stability AI's language model

replicate / flan-t5-xl
A language model by Google for tasks like classification, summarization, and more

replicate / gpt-j-6b
A large language model by EleutherAI

replicate / llama-7b
Transformers implementation of the LLaMA language model