Use a language model

Large language models understand and generate natural language. They power chatbots, search engines, writing aids, and more.

Use these for:

  • Conversational AI: Chat and engage in natural dialogue. Get an AI assistant.
  • Question answering: Provide informative answers to questions. Build a knowledge base.
  • Text generation: Generate fluent continuations of text. Autocomplete your writing.
  • Summarization: Summarize long passages of text. Get key points quickly.
  • Translation: Translate between languages. Communicate across language barriers.

Language models keep getting bigger and better at these tasks. The largest models today exhibit impressive reasoning skills. But you can get great results from smaller, faster, cheaper models too.

Our Pick: Meta Llama 3 8B Instruct

Meta’s new Llama 3 8B Instruct is the clear choice for most applications. With 8 billion parameters, an 8K context window, and instruction tuning layered on pretraining over 15T+ tokens, it achieves state-of-the-art performance for its size across a wide range of tasks. It’s fast, affordable, and flexible.
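
On Replicate, each of these models is one API call away. Here’s a minimal sketch using the official Python client (`pip install replicate`) to run our pick. It assumes a REPLICATE_API_TOKEN environment variable is set; the max_tokens parameter is an assumption and may be named differently on other models.

```python
import replicate

# One-off completion from Llama 3 8B Instruct.
# Assumes REPLICATE_API_TOKEN is set in your environment.
output = replicate.run(
    "meta/meta-llama-3-8b-instruct",
    input={
        "prompt": "Summarize the plot of Hamlet in two sentences.",
        "max_tokens": 256,  # assumed cap on generated tokens
    },
)

# Language models on Replicate return text in chunks; join them.
print("".join(output))
```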

Upgrade Pick: Meta Llama 3 70B Instruct

For the most demanding applications, Llama 3 70B Instruct is the top performer. Its 70 billion parameters, pretrained on 15T+ tokens, deliver a clear step up in accuracy and nuance across complex language tasks.

The 70B model shares the same efficiency benefits and safety features as the 8B version. But with greater capacity, it excels at applications like content creation, conversational AI, and code generation.
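
Longer generations benefit from streaming tokens as they arrive rather than waiting for the full completion. A sketch of that pattern with the Replicate Python client’s stream helper (the prompt is illustrative):

```python
import replicate

# Stream tokens from Llama 3 70B Instruct as they are generated.
for event in replicate.stream(
    "meta/meta-llama-3-70b-instruct",
    input={"prompt": "Draft a friendly onboarding email for new engineers."},
):
    print(str(event), end="")
```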

Budget Pick: Flan-T5 XL

For latency-sensitive, cost-constrained applications, Flan-T5 XL remains a strong choice. While it can’t match Llama 3’s overall performance, its lean 3B parameter size makes it fast and economical for focused tasks.

If speed and cost are critical and your use case is well-defined, like classification or summarization, Flan-T5 XL delivers reliable results quickly and affordably.
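
Because Flan-T5 is tuned to follow task instructions, a direct prompt is usually enough. A hedged sketch of summarization with the Replicate Python client; only the prompt input is assumed here, and the model may expose other parameters:

```python
import replicate

article = (
    "Large language models are neural networks trained on vast text corpora. "
    "They can answer questions, draft prose, translate between languages, and "
    "summarize documents; smaller instruction-tuned models handle many of "
    "these tasks well."
)

# Flan-T5 responds well to plain task instructions.
output = replicate.run(
    "replicate/flan-t5-xl",
    input={"prompt": f"Summarize the following text in one sentence:\n\n{article}"},
)
print("".join(output))
```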

Recommended models

mistralai/mixtral-8x7b-instruct-v0.1

The Mixtral-8x7B-Instruct-v0.1 Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts model, fine-tuned to be a helpful assistant.

6.3M runs

meta/llama-2-70b-chat

A 70 billion parameter language model from Meta, fine-tuned for chat completions

5.2M runs

meta/llama-2-7b-chat

A 7 billion parameter language model from Meta, fine-tuned for chat completions

4.6M runs

meta/llama-2-13b-chat

A 13 billion parameter language model from Meta, fine-tuned for chat completions

4M runs

meta/meta-llama-3-70b-instruct

A 70 billion parameter language model from Meta, fine-tuned for chat completions

2.4M runs

mistralai/mistral-7b-instruct-v0.2

The Mistral-7B-Instruct-v0.2 Large Language Model (LLM) is an improved instruct fine-tuned version of Mistral-7B-Instruct-v0.1.

2.1M runs

mistralai/mistral-7b-instruct-v0.1

An instruction-tuned 7 billion parameter language model from Mistral

869.2K runs

mistralai/mistral-7b-v0.1

A 7 billion parameter language model from Mistral.

676.9K runs

replicate/dolly-v2-12b

An open source instruction-tuned large language model developed by Databricks

453.1K runs

meta/meta-llama-3-8b

Base version of Llama 3, an 8 billion parameter language model from Meta.

258.3K runs

replicate/vicuna-13b

A LLaMA-based language model fine-tuned on user-shared ChatGPT conversations

251K runs

01-ai/yi-34b-chat

The Yi series models are large language models trained from scratch by developers at 01.AI.

231.8K runs

meta/meta-llama-3-8b-instruct

An 8 billion parameter language model from Meta, fine-tuned for chat completions

210K runs

01-ai/yi-6b

The Yi series models are large language models trained from scratch by developers at 01.AI.

158.1K runs

replicate/flan-t5-xl

A language model by Google for tasks like classification, summarization, and more

131.8K runs

stability-ai/stablelm-tuned-alpha-7b

7 billion parameter version of Stability AI's language model

110.6K runs

replicate/llama-7b

Transformers implementation of the LLaMA language model

97.9K runs

google-deepmind/gemma-2b-it

2B instruct version of Google’s Gemma model

78.8K runs

nateraw/nous-hermes-2-solar-10.7b

Nous Hermes 2 - SOLAR 10.7B is the flagship Nous Research model, built on the SOLAR 10.7B base model.

46K runs

google-deepmind/gemma-7b-it

7B instruct version of Google’s Gemma model

33.9K runs

replicate/oasst-sft-1-pythia-12b

An open source instruction-tuned large language model developed by Open-Assistant

32.4K runs

kcaverly/nous-hermes-2-yi-34b-gguf

Nous Hermes 2 - Yi-34B is a state-of-the-art Yi fine-tune, trained on GPT-4-generated synthetic data

8.5K runs

nateraw/nous-hermes-llama2-awq

TheBloke/Nous-Hermes-Llama2-AWQ served with vLLM

7.2K runs

google-deepmind/gemma-7b

7B base version of Google’s Gemma model

6.7K runs

replicate/gpt-j-6b

A 6 billion parameter language model by EleutherAI

6.2K runs

meta/meta-llama-3-70b

Base version of Llama 3, a 70 billion parameter language model from Meta.

5.7K runs

lucataco/qwen1.5-72b

Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data

3.7K runs

01-ai/yi-6b-chat

The Yi series models are large language models trained from scratch by developers at 01.AI.

3.6K runs

lucataco/phi-2

Phi-2, a 2.7 billion parameter language model by Microsoft

2.3K runs

replit/replit-code-v1-3b

Generate code with Replit's replit-code-v1-3b large language model

1.9K runs

google-deepmind/gemma-2b

2B base version of Google’s Gemma model

427 runs

lucataco/qwen1.5-14b

Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data

219 runs

lucataco/phixtral-2x2_8

phixtral-2x2_8 is the first Mixture of Experts (MoE) made with two microsoft/phi-2 models, inspired by the mistralai/Mixtral-8x7B-v0.1 architecture

175 runs

adirik/mamba-2.8b

Base version of Mamba 2.8B, a 2.8 billion parameter state space language model

167 runs

lucataco/olmo-7b

OLMo is a series of Open Language Models designed to enable the science of language models

69 runs

adirik/mamba-130m

Base version of Mamba 130M, a 130 million parameter state space language model

62 runs

lucataco/qwen1.5-7b

Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data

59 runs

adirik/mamba-2.8b-slimpj

Base version of Mamba 2.8B SlimPajama, a 2.8 billion parameter state space language model

39 runs

adirik/mamba-1.4b

Base version of Mamba 1.4B, a 1.4 billion parameter state space language model

37 runs

adirik/mamba-370m

Base version of Mamba 370M, a 370 million parameter state space language model

27 runs

adirik/mamba-790m

Base version of Mamba 790M, a 790 million parameter state space language model

18 runs