mistralai / mistral-7b-v0.1

A 7 billion parameter language model from Mistral.



Pricing

This language model is priced per token: you pay for the input tokens you send and for the output tokens the model generates.

Check out our docs for more information about how per-token pricing works on Replicate.
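The cost of a prediction is a simple weighted sum of the two token counts. A minimal sketch, using hypothetical per-token rates (the actual prices are listed on the model page and in the Replicate docs):

```python
# Hypothetical rates for illustration only -- check the model page for real prices.
INPUT_PRICE_PER_TOKEN = 0.05 / 1_000_000   # assumed: $0.05 per 1M input tokens
OUTPUT_PRICE_PER_TOKEN = 0.25 / 1_000_000  # assumed: $0.25 per 1M output tokens

def prediction_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of one prediction in dollars: input and output tokens
    are billed at separate per-token rates."""
    return (input_tokens * INPUT_PRICE_PER_TOKEN
            + output_tokens * OUTPUT_PRICE_PER_TOKEN)

# e.g. a 1,000-token prompt that generates 500 tokens of output:
print(f"${prediction_cost(1_000, 500):.6f}")  # → $0.000175
```

Output tokens are typically billed at a higher rate than input tokens, since generating a token costs a full forward pass while prompt tokens can be processed in parallel.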

Readme

The Mistral-7B-v0.1 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters. Mistral-7B-v0.1 outperforms Llama 2 13B on all benchmarks we tested.

Model Architecture

Mistral-7B-v0.1 is a transformer model, with the following architecture choices:

  • Grouped-Query Attention
  • Sliding-Window Attention
  • Byte-fallback BPE tokenizer
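Sliding-window attention restricts each token to attending over a fixed window of recent positions instead of the full causal prefix, which bounds the per-layer attention cost for long sequences. A minimal NumPy sketch of the attention mask this implies (the window size here is illustrative; Mistral-7B uses a much larger window):

```python
import numpy as np

def sliding_window_mask(seq_len: int, window: int) -> np.ndarray:
    """Boolean attention mask: position i may attend to positions j
    with i - window < j <= i (causal, limited to the last `window` tokens)."""
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    return (j <= i) & (j > i - window)

# With a window of 3, token 4 attends only to positions 2, 3, and 4:
mask = sliding_window_mask(6, 3)
print(mask[4].tolist())  # → [False, False, True, True, True, False]
```

Grouped-query attention is complementary: it shares each key/value head across several query heads, shrinking the KV cache rather than the attention span.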

For further information, see the Mistral-7B launch [blog post](https://mistral.ai/news/announcing-mistral-7b/).