Collections

Streaming language models

Language models that support streaming responses. See https://replicate.com/docs/streaming
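
As a hedged sketch of how streaming works in practice, assuming the Replicate Python client (`pip install replicate`) and a `REPLICATE_API_TOKEN` in the environment; the prompt helper shown uses Llama 2's chat template, and the model name is one from this collection:

```python
def llama2_chat_prompt(user_message: str,
                       system_prompt: str = "You are a helpful assistant.") -> str:
    """Format a single-turn message with the Llama 2 chat template."""
    return f"[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{user_message} [/INST]"


def stream_completion(prompt: str, model: str = "meta/llama-2-70b-chat"):
    """Yield output tokens as they arrive from the model.

    Requires the `replicate` package and REPLICATE_API_TOKEN set in the
    environment; the import is deferred so the prompt helper above stays
    dependency-free.
    """
    import replicate

    for event in replicate.stream(model, input={"prompt": prompt}):
        yield str(event)
```

A caller would then print tokens incrementally, e.g. `for tok in stream_completion(llama2_chat_prompt("Write a haiku about rivers.")): print(tok, end="", flush=True)`.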

Models we recommend

meta/llama-2-70b-chat

A 70 billion parameter language model from Meta, fine tuned for chat completions

4.5M runs

yorickvp/llava-13b

Visual instruction tuning towards large language and vision models with GPT-4 level capabilities

4.2M runs

meta/llama-2-13b-chat

A 13 billion parameter language model from Meta, fine tuned for chat completions

3.8M runs

meta/llama-2-7b-chat

A 7 billion parameter language model from Meta, fine tuned for chat completions

3.4M runs

mistralai/mixtral-8x7b-instruct-v0.1

The Mixtral-8x7B-instruct-v0.1 Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts tuned to be a helpful assistant.

1.6M runs

fofr/prompt-classifier

Determines the toxicity of text-to-image prompts; a llama-13b fine-tune. Returns a [SAFETY_RANKING] between 0 (safe) and 10 (toxic)

1.4M runs
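
Based on the description above, a minimal sketch of calling this classifier: append the `[SAFETY_RANKING]` marker to the text and parse an integer score out of the reply. The exact input/output format is an assumption; check the model page before relying on it.

```python
import re


def build_input(prompt_text: str) -> str:
    """Suffix the text with the [SAFETY_RANKING] marker the model expects
    (assumed format, per the collection description)."""
    return f"{prompt_text} [SAFETY_RANKING]"


def parse_ranking(model_output: str) -> int:
    """Pull the first integer in the 0-10 range out of the model's reply."""
    match = re.search(r"\b(10|[0-9])\b", model_output)
    if match is None:
        raise ValueError(f"no ranking found in: {model_output!r}")
    return int(match.group(1))
```

The actual model call would pass `build_input(...)` as the prompt and feed the completion to `parse_ranking`.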

mistralai/mistral-7b-instruct-v0.2

The Mistral-7B-Instruct-v0.2 Large Language Model (LLM) is an improved instruct fine-tuned version of Mistral-7B-Instruct-v0.1.

837K runs

mistralai/mistral-7b-instruct-v0.1

An instruction-tuned 7 billion parameter language model from Mistral

783.4K runs

replicate/dolly-v2-12b

An open source instruction-tuned large language model developed by Databricks

453.1K runs

joehoover/instructblip-vicuna13b

An instruction-tuned multi-modal model based on BLIP-2 and Vicuna-13B

251K runs

replicate/vicuna-13b

A large language model that's been fine-tuned on ChatGPT interactions

235.7K runs

mistralai/mistral-7b-v0.1

A 7 billion parameter language model from Mistral.

199.4K runs

meta/llama-2-7b

Base version of Llama 2 7B, a 7 billion parameter language model

197.9K runs

01-ai/yi-34b-chat

The Yi series models are large language models trained from scratch by developers at 01.AI.

186.6K runs

01-ai/yi-6b

The Yi series models are large language models trained from scratch by developers at 01.AI.

157.6K runs

meta/llama-2-70b

Base version of Llama 2, a 70 billion parameter language model from Meta.

157.3K runs

yorickvp/llava-v1.6-34b

LLaVA v1.6: Large Language and Vision Assistant (Nous-Hermes-2-34B)

121.3K runs

replicate/flan-t5-xl

A language model by Google for tasks like classification, summarization, and more

118.3K runs

spuuntries/flatdolphinmaid-8x7b-gguf

Undi95's FlatDolphinMaid 8x7B Mixtral Merge, GGUF Q5_K_M quantized by TheBloke.

104.2K runs

meta/codellama-13b

A 13 billion parameter Llama tuned for code completion

102.1K runs

stability-ai/stablelm-tuned-alpha-7b

7 billion parameter version of Stability AI's language model

100.5K runs

replicate/llama-7b

Transformers implementation of the LLaMA language model

96.7K runs

meta/codellama-34b-instruct

A 34 billion parameter Llama tuned for coding and conversation

84.2K runs

nateraw/openchat_3.5-awq

OpenChat: Advancing Open-source Language Models with Mixed-Quality Data

72.3K runs

andreasjansson/sheep-duck-llama-2-70b-v1-1-gguf

72K runs

nateraw/goliath-120b

An auto-regressive causal LM created by combining two fine-tuned Llama 2 70B models into one.

68.2K runs

nateraw/mistral-7b-openorca

Mistral-7B-v0.1 fine tuned for chat with the OpenOrca dataset.

61.1K runs

joehoover/mplug-owl

An instruction-tuned multimodal large language model that generates text based on user-provided prompts and images

52.9K runs

fofr/image-prompts

Generate image prompts for Midjourney. Prefix inputs with "Image: "

50.3K runs

yorickvp/llava-v1.6-mistral-7b

LLaVA v1.6: Large Language and Vision Assistant (Mistral-7B)

38.5K runs

kcaverly/dolphin-2.5-mixtral-8x7b-gguf

Mixtral-8x7b MOE model trained for chat with the dolphin dataset, quantized

37.9K runs

antoinelyset/openhermes-2-mistral-7b-awq

37.5K runs

joehoover/falcon-40b-instruct

A 40 billion parameter language model trained to follow human instructions.

36K runs

meta/codellama-13b-instruct

A 13 billion parameter Llama tuned for coding and conversation

34.5K runs

meta/llama-2-13b

Base version of Llama 2 13B, a 13 billion parameter language model

33.4K runs

replicate/oasst-sft-1-pythia-12b

An open source instruction-tuned large language model developed by Open-Assistant

32.4K runs

meta/codellama-7b-instruct

A 7 billion parameter Llama tuned for coding and conversation

27.7K runs

nateraw/nous-hermes-2-solar-10.7b

Nous Hermes 2 - SOLAR 10.7B is the flagship Nous Research model on the SOLAR 10.7B base model.

21.2K runs

lucataco/dolphin-2.2.1-mistral-7b

Mistral-7B-v0.1 fine tuned for chat with the Dolphin dataset (an open-source implementation of Microsoft's Orca)

19.3K runs

uwulewd/airoboros-llama-2-70b

Inference for Airoboros L2 70B 2.1 (GPTQ) using ExLlama.

16.7K runs

yorickvp/llava-v1.6-vicuna-13b

LLaVA v1.6: Large Language and Vision Assistant (Vicuna-13B)

16K runs

replicate/lifeboat-70b

15.5K runs

meta/codellama-7b

A 7 billion parameter Llama tuned for coding and conversation

13.7K runs

gregwdata/defog-sqlcoder-q8

Defog's SQLCoder is a state-of-the-art LLM for converting natural language questions to SQL queries. SQLCoder is a 15B parameter model fine-tuned on a base StarCoder model.

12.3K runs

antoinelyset/openhermes-2.5-mistral-7b

11.8K runs

lucataco/dolphin-2.1-mistral-7b

Mistral-7B-v0.1 fine tuned for chat with the Dolphin dataset (an open-source implementation of Microsoft's Orca)

11.1K runs

meta/codellama-70b-instruct

A 70 billion parameter Llama tuned for coding and conversation

10.7K runs

nomagick/chatglm2-6b

ChatGLM2-6B: An Open Bilingual Chat LLM

10.6K runs

meta/codellama-34b

A 34 billion parameter Llama tuned for coding and conversation

9K runs

kcaverly/nous-hermes-2-yi-34b-gguf

Nous Hermes 2 - Yi-34B is a state-of-the-art Yi fine-tune, trained on GPT-4-generated synthetic data

8.3K runs

replicate/mpt-7b-storywriter

A 7B parameter LLM fine-tuned to support contexts with more than 65K tokens

8.2K runs

nomagick/chatglm3-6b

A 6B parameter open bilingual chat LLM

7.2K runs

nateraw/nous-hermes-llama2-awq

TheBloke/Nous-Hermes-Llama2-AWQ served with vLLM

7.1K runs

joehoover/zephyr-7b-alpha

A high-performing language model trained to act as a helpful assistant

6.8K runs

meta/codellama-34b-python

A 34 billion parameter Llama tuned for coding with Python

6.1K runs

replicate/gpt-j-6b

A large language model by EleutherAI

6.1K runs

spuuntries/miqumaid-v1-70b-gguf

NeverSleep's MiquMaid v1 70B Miqu Finetune, GGUF Q3_K_M quantized by NeverSleep.

5.7K runs

lucataco/moondream1

(Research only) Moondream1 is a vision language model that performs on par with models twice its size

5.7K runs

nateraw/zephyr-7b-beta

Zephyr-7B-beta, an LLM trained to act as a helpful assistant.

5.6K runs

yorickvp/llava-v1.6-vicuna-7b

LLaVA v1.6: Large Language and Vision Assistant (Vicuna-7B)

5.1K runs

replicate/llama-13b-lora

Transformers implementation of the LLaMA 13B language model

4.9K runs

andreasjansson/llama-2-13b-chat-gguf

Llama-2 13B chat with support for grammars and jsonschema

4.9K runs

kcaverly/dolphin-2.7-mixtral-8x7b-gguf

Uncensored Mixtral-8x7b MOE model trained for chat with the Dolphin dataset

4.2K runs

kcaverly/neuralbeagle14-7b-gguf

NeuralBeagle14-7B is (probably) the best 7B model you can find!

4.1K runs

andreasjansson/codellama-7b-instruct-gguf

CodeLlama-7B-instruct with support for grammars and jsonschema

4.1K runs

meta/codellama-7b-python

A 7 billion parameter Llama tuned for coding with Python

3.6K runs

joehoover/sql-generator

3.5K runs

meta/codellama-13b-python

A 13 billion parameter Llama tuned for coding with Python

3.3K runs

anotherjesse/llava-lies

LLaVA injecting randomness into the image

2.9K runs

kcaverly/dolphin-2.6-mixtral-8x7b-gguf

Mixtral-8x7b MOE model trained for chat with the dolphin + samantha's empathy dataset

2.6K runs

01-ai/yi-6b-chat

The Yi series models are large language models trained from scratch by developers at 01.AI.

2.5K runs

organisciak/ocsai-llama2-7b

2.3K runs

kcaverly/openchat-3.5-1210-gguf

The "Overall Best Performing Open Source 7B Model" for Coding + Generalization or Mathematical Reasoning

2K runs

01-ai/yi-34b

The Yi series models are large language models trained from scratch by developers at 01.AI.

2K runs

replit/replit-code-v1-3b

Generate code with Replit's replit-code-v1-3b large language model

1.9K runs

kcaverly/deepseek-coder-33b-instruct-gguf

A quantized 33B parameter language model from Deepseek for SOTA repository level code completion

1.8K runs

nateraw/salmonn

SALMONN: Speech Audio Language Music Open Neural Network

1.7K runs

andreasjansson/llama-2-70b-chat-gguf

Llama-2 70B chat with support for grammars and jsonschema

1.6K runs

nomagick/qwen-14b-chat

Qwen-14B-Chat is a Transformer-based large language model, which is pretrained on a large volume of data, including web texts, books, codes, etc.

1.6K runs

01-ai/yi-34b-200k

The Yi series models are large language models trained from scratch by developers at 01.AI.

1.6K runs

niron1/openorca-platypus2-13b

OpenOrca-Platypus2-13B is a merge of garage-bAInd/Platypus2-13B and Open-Orca/OpenOrcaxOpenChat-Preview2-13B.

1.3K runs

mattt/orca-2-13b

1.3K runs

daanelson/flan-t5-large

A language model for tasks like classification, summarization, and more.

1.1K runs

anotherjesse/sdxl-recur

Explore img2img zooming with SDXL

1.1K runs

niron1/qwen-7b-chat

Qwen-7B is the 7B-parameter version of the large language model series Qwen (abbr. Tongyi Qianwen), proposed by Alibaba Cloud. Qwen-7B is a Transformer-based large language model pretrained on a large volume of data, including web texts, books, codes, etc.

899 runs

nateraw/samsum-llama-2-13b

833 runs

moinnadeem/vllm-engine-llama-7b

663 runs

nateraw/causallm-14b

CausalLM/14B model with AWQ quantization. Perhaps better than all existing models < 70B, in most quantitative evaluations...

663 runs

charles-dyfis-net/llama-2-13b-hf--lmtp-8bit

617 runs

ruben-svensson/llama2-aqua-test1

605 runs

andreasjansson/llama-2-13b-gguf

Llama-2 13B with support for grammars and jsonschema

577 runs

papermoose/llama-pajama

546 runs

stability-ai/stablelm-base-alpha-7b

7B parameter base version of Stability AI's language model

532 runs

nomagick/qwen-vl-chat

Qwen-VL-Chat but with raw ChatML prompt interface and streaming

507 runs

andreasjansson/wizardcoder-python-34b-v1-gguf

WizardCoder-python-34B-v1.0 with support for grammars and jsonschema

464 runs

fofr/llama2-prompter

Llama2 13b base model fine-tuned on text to image prompts

463 runs

meta/codellama-70b-python

A 70 billion parameter Llama tuned for coding with Python

449 runs

fofr/star-trek-gpt-j-6b

gpt-j-6b trained on the Memory Alpha Star Trek Wiki

417 runs

replicate-internal/staging-llama-2-7b

412 runs

kcaverly/dolphin-2.6-mistral-7b-gguf

Mistral 7B v2 fine-tuned on the Dolphin dataset

363 runs

andreasjansson/plasma

Generate plasma shader equations

346 runs

stability-ai/stablelm-base-alpha-3b

3B parameter base version of Stability AI's language model

343 runs

andreasjansson/codellama-34b-instruct-gguf

CodeLlama-34B-instruct with support for grammars and jsonschema

337 runs

nwhitehead/llama2-7b-chat-gptq

296 runs

antoinelyset/openhermes-2.5-mistral-7b-awq

281 runs

niron1/llama-2-7b-chat

Llama 2 7B chat by Meta, with streaming support, an unaltered prompt, properly working temperature, and economical hardware.

252 runs

spuuntries/miqumaid-v2-2x70b-dpo-gguf

NeverSleep's MiquMaid v2 2x70B Miqu-Mixtral MoE DPO Finetune, GGUF Q2_K quantized by NeverSleep.

250 runs

nomagick/chatglm3-6b-32k

A 6B parameter open bilingual chat LLM (optimized for 8k+ context)

249 runs

meta/codellama-70b

A 70 billion parameter Llama tuned for coding and conversation

246 runs

peter65374/openbuddy-llemma-34b-gguf

A Cog implementation of the "openbuddy-llemma-34b" 4-bit quantized model.

245 runs

kcaverly/nous-capybara-34b-gguf

A SOTA Nous Research fine-tune of Yi-34B-200K, trained on the Capybara dataset.

234 runs

cbh123/dylan-lyrics

Llama 2 13B fine-tuned on Bob Dylan lyrics

231 runs

antoinelyset/openhermes-2-mistral-7b

Simple version of https://huggingface.co/teknium/OpenHermes-2-Mistral-7B

224 runs

kcaverly/deepseek-coder-6.7b-instruct

A ~7B parameter language model from Deepseek for SOTA repository level code completion

210 runs

nomagick/chatglm2-6b-int4

ChatGLM2-6B: An Open Bilingual Chat LLM (int4)

206 runs

nateraw/axolotl-llama-2-7b-english-to-hinglish

201 runs

zeke/nyu-llama-2-7b-chat-training-test

A test model for fine-tuning Llama 2

188 runs

xrunda/med

175 runs

lucataco/tinyllama-1.1b-chat-v1.0

This is the chat model finetuned on top of TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T

162 runs

nateraw/sqlcoder-70b-alpha

161 runs

nateraw/stablecode-completion-alpha-3b-4k

154 runs

fofr/star-trek-adventure

153 runs

kcaverly/nexus-raven-v2-13b-gguf

A quantized 13B parameter language model from NexusFlow for SOTA zero-shot function calling

151 runs

lucataco/qwen1.5-72b

Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data

143 runs

fofr/neuromancer-13b

llama-13b-base fine-tuned on Neuromancer style

139 runs

lucataco/qwen1.5-14b

Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data

139 runs

m1guelpf/mario-gpt

Using language models to generate Super Mario Bros levels

137 runs

fofr/star-trek-flan

flan-t5-xl trained on the Memory Alpha Star Trek Wiki

129 runs

nateraw/samsum-llama-7b

llama-2-7b fine-tuned on the samsum dataset for dialogue summarization

129 runs

moinnadeem/fastervicuna_13b

Re-implements LLaMA using a higher-MFU implementation

119 runs

fofr/star-trek-llama

llama-7b trained on the Memory Alpha Star Trek Wiki

119 runs

kcaverly/nous-hermes-2-solar-10.7b-gguf

Nous Hermes 2 - SOLAR 10.7B is the flagship Nous Research model on the SOLAR 10.7B base model.

118 runs

titocosta/notus-7b-v1

Notus-7b-v1 model

117 runs

adirik/mamba-2.8b

Base version of Mamba 2.8B, a 2.8 billion parameter state space language model

115 runs

hamelsmu/honeycomb-2

Honeycomb NLQ Generator

97 runs

kcaverly/phind-codellama-34b-v2-gguf

A quantized 34B parameter language model from Phind for code completion

93 runs

rybens92/una-cybertron-7b-v2--lmtp-8bit

92 runs

nateraw/wizardcoder-python-34b-v1.0

88 runs

nateraw/llama-2-7b-paraphrase-v1

79 runs

nateraw/llama-2-7b-chat-hf

78 runs

crowdy/line-lang-3.6b

An implementation of a 3.6B-parameter Japanese large language model

75 runs

deepseek-ai/deepseek-math-7b-instruct

Pushing the Limits of Mathematical Reasoning in Open Language Models - Instruct model

75 runs

tanzir11/merge

74 runs

moinnadeem/codellama-34b-instruct-vllm

72 runs

lucataco/wizardcoder-33b-v1.1-gguf

WizardCoder: Empowering Code Large Language Models with Evol-Instruct

71 runs

lucataco/phixtral-2x2_8

phixtral-2x2_8 is the first Mixture of Experts (MoE) made with two microsoft/phi-2 models, inspired by the mistralai/Mixtral-8x7B-v0.1 architecture

70 runs

nateraw/defog-sqlcoder-7b-2

A capable large language model for natural language to SQL generation.

70 runs

cjwbw/gemma-7b-it

7B instruct version of Google’s Gemma model

67 runs

juanjaragavi/abby-llama-2-7b-chat

Abby is a stoic philosopher and a loving and caring mature woman.

66 runs

nateraw/codellama-7b-instruct-hf

62 runs

spuuntries/borealis-10.7b-dpo-gguf

Undi95's Borealis 10.7B Mistral DPO Finetune, GGUF Q5_K_M quantized by Undi95.

62 runs

deepseek-ai/deepseek-math-7b-base

Pushing the Limits of Mathematical Reasoning in Open Language Models - Base model

61 runs

nateraw/aidc-ai-business-marcoroni-13b

60 runs

cjwbw/gemma-7b

7B base version of Google’s Gemma model

54 runs

chigozienri/llava-birds

54 runs

replicate-internal/mixtral-8x7b-instruct-v0.1-pget

48 runs

zallesov/super-real-llama2

47 runs

lidarbtc/kollava-v1.5

korean version of llava-v1.5

47 runs

cbh123/samsum

46 runs

replicate/elixir-gen

Fine-tuned Llama 13b on Elixir docstrings (WIP)

45 runs

cbh123/homerbot

45 runs

technillogue/mixtral-instruct-nix

45 runs

titocosta/starling

Starling-LM-7B-alpha

43 runs

peter65374/openbuddy-mistral-7b

Openbuddy finetuned mistral-7b in GPTQ quantization in 4bits by TheBloke

39 runs

sruthiselvaraj/finetuned-llama2

35 runs

hamelsmu/honeycomb

Honeycomb NLQ Generator

33 runs

lucataco/qwen1.5-7b

Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data

33 runs

lucataco/olmo-7b

OLMo is a series of Open Language Models designed to enable the science of language models

32 runs

adirik/mamba-130m

Base version of Mamba 130M, a 130 million parameter state space language model

29 runs

seanoliver/bob-dylan-fun-tuning

Llama fine-tune-athon project training Llama 2 on Bob Dylan lyrics.

26 runs

adirik/mamba-2.8b-slimpj

Base version of Mamba 2.8B Slim Pyjama, a 2.8 billion parameter state space language model

26 runs

cjwbw/gemma-2b

2B base version of Google’s Gemma model

23 runs

fleshgordo/orni2-chat

20 runs

lucataco/qwen1.5-0.5b

Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data

20 runs

lucataco/qwen1.5-1.8b

Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data

18 runs

cjwbw/gemma-2b-it

2B instruct version of Google’s Gemma model

17 runs

nateraw/codellama-7b-instruct

17 runs

charles-dyfis-net/llama-2-7b-hf--lmtp-4bit

16 runs

nateraw/llama-2-7b-samsum

15 runs

adirik/mamba-1.4b

Base version of Mamba 1.4B, a 1.4 billion parameter state space language model

15 runs

juanjaragavi/abbot-llama-2-7b-chat

Abbot is a brutally honest stoic philosopher. He is here to help the 'User' be their best self, no coddling.

14 runs

nateraw/gairmath-abel-7b

13 runs

msamogh/iiu-generator-llama2-7b-2

13 runs

nateraw/codellama-7b

12 runs

nateraw/codellama-34b

12 runs

charles-dyfis-net/llama-2-13b-hf--lmtp

11 runs

divyavanmahajan/my-pet-llama

11 runs

adirik/mamba-790m

Base version of Mamba 790M, a 790 million parameter state space language model

11 runs

replicate-internal/gemma-2b-it

2B instruct version of the Gemma model

10 runs

adirik/mamba-370m

Base version of Mamba 370M, a 370 million parameter state space language model

9 runs

charles-dyfis-net/llama-2-13b-hf--lmtp-4bit

7 runs

nateraw/codellama-13b

6 runs

lucataco/qwen1.5-4b

Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data

5 runs

nateraw/codellama-13b-instruct

2 runs