mistralai / mixtral-8x7b-instruct-v0.1

The Mixtral-8x7B-Instruct-v0.1 Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts model, instruction-tuned to be a helpful assistant.


Pricing

This language model is priced by the number of input tokens sent and the number of output tokens generated.

Check out our docs for more information about how per-token pricing works on Replicate.
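As a hedged illustration of how per-token pricing works in general, the cost of a request is the input and output token counts multiplied by their respective per-token rates. The prices in the sketch below are placeholders, not Replicate's actual rates for this model:

```python
# Hypothetical per-token prices -- placeholders, NOT Replicate's actual rates.
PRICE_PER_INPUT_TOKEN = 0.30 / 1_000_000   # e.g. $0.30 per million input tokens
PRICE_PER_OUTPUT_TOKEN = 1.00 / 1_000_000  # e.g. $1.00 per million output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of one request: each side is billed at its own per-token rate."""
    return (input_tokens * PRICE_PER_INPUT_TOKEN
            + output_tokens * PRICE_PER_OUTPUT_TOKEN)

# A prompt of 1,200 tokens that generates 300 tokens:
print(f"${request_cost(1200, 300):.6f}")  # -> $0.000660
```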

Readme

The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts model. Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested.

For full details of this model, please read our release blog post or the model card on Hugging Face.
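For a concrete sense of how the model is called, here is a minimal sketch using Replicate's Python client. The input parameter names (prompt, max_new_tokens) are assumptions and may differ from this model's actual input schema:

```python
# Minimal sketch using Replicate's Python client (pip install replicate).
# Requires the REPLICATE_API_TOKEN environment variable to be set.
import replicate

output = replicate.run(
    "mistralai/mixtral-8x7b-instruct-v0.1",
    input={
        "prompt": "Explain what a Sparse Mixture of Experts model is.",
        "max_new_tokens": 256,  # assumed parameter name; check the model's schema
    },
)

# Language models on Replicate typically stream output token by token,
# so the result is an iterator of strings; join them for the full text.
print("".join(output))
```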