meta / meta-llama-3.1-405b-instruct

Meta's flagship 405 billion parameter language model, fine-tuned for chat completions



Pricing

This language model is priced by the number of input tokens sent and the number of output tokens generated.

Check out our docs for more information about how per-token pricing works on Replicate.
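As a rough illustration of how per-token pricing adds up, the sketch below estimates the dollar cost of a single request from its token counts. The per-token prices are hypothetical placeholders, not Replicate's actual rates; substitute the prices listed for this model.

```python
# Hypothetical per-token prices -- substitute the real rates from the pricing page.
INPUT_PRICE_PER_TOKEN = 9.50 / 1_000_000    # placeholder: $9.50 per million input tokens
OUTPUT_PRICE_PER_TOKEN = 9.50 / 1_000_000   # placeholder: $9.50 per million output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the dollar cost of one request from its token counts."""
    return input_tokens * INPUT_PRICE_PER_TOKEN + output_tokens * OUTPUT_PRICE_PER_TOKEN

# A ~1,000-token prompt that generates a ~500-token reply:
print(f"${estimate_cost(1_000, 500):.4f}")  # roughly $0.014 at the placeholder rates
```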

Readme

Meta Llama 3.1 405B Instruct is an instruction-tuned generative language model developed by Meta. It is optimized for multilingual dialogue use cases and outperforms many of the available open-source and closed chat models on common industry benchmarks. Supported languages are English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.

The model is trained on over 15 trillion tokens of publicly available online data, a mix of multilingual text and code, with a data cutoff of December 2023. Training took 30.84 million GPU hours.

For additional details, please refer to the official model card: https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md
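For running the model from code, here is a minimal sketch using the official replicate Python client, assuming REPLICATE_API_TOKEN is set in the environment. The input field names (prompt, max_tokens, temperature) follow the common pattern for Replicate language models and should be checked against this model's input schema.

```python
import replicate

# Run a chat-style completion; the client reads REPLICATE_API_TOKEN from the environment.
output = replicate.run(
    "meta/meta-llama-3.1-405b-instruct",
    input={
        "prompt": "Explain the difference between a llama and an alpaca.",
        "max_tokens": 256,    # assumed parameter name; verify against the model's schema
        "temperature": 0.7,   # assumed parameter name; verify against the model's schema
    },
)

# Language models on Replicate typically return an iterator of text chunks.
print("".join(output))
```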