meta / llama-2-7b-chat

A 7 billion parameter language model from Meta, fine tuned for chat completions


If you haven’t yet trained a model on Replicate, we recommend reading one of the getting-started guides on training first.

Trainings for this model run on 8x Nvidia A40 (Large) GPU hardware, which costs $0.0058 per second.
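At $0.0058 per second, costs add up quickly on longer runs. A quick back-of-the-envelope sketch of the arithmetic (the helper function below is illustrative, not part of the Replicate library):

```python
# Training on 8x Nvidia A40 (Large) hardware is billed at $0.0058 per second.
PRICE_PER_SECOND = 0.0058

def training_cost(seconds: float) -> float:
    """Return the dollar cost of a training run of the given duration."""
    return round(seconds * PRICE_PER_SECOND, 2)

print(training_cost(3600))      # one hour:  20.88
print(training_cost(6 * 3600))  # six hours: 125.28
```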

Create a training

Install the Python library:

pip install replicate

Then, run this to create a training with meta/llama-2-7b-chat:13c3cdee as the base model:

import replicate

training = replicate.trainings.create(
    version="meta/llama-2-7b-chat:13c3cdee13ee059ab779f0291d29054dab00a47dad8261375654de5540165fb0",
    # set your inputs here
    input={},
    destination="{username}/<destination-model-name>",
)
Alternatively, create the training directly with the HTTP API:

curl -s -X POST \
  -H "Authorization: Token $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"destination": "{username}/<destination-model-name>", "input": {}}' \
  https://api.replicate.com/v1/models/meta/llama-2-7b-chat/versions/13c3cdee13ee059ab779f0291d29054dab00a47dad8261375654de5540165fb0/trainings

The API response will look like this:

{
  "id": "zz4ibbonubfz7carwiefibzgga",
  "version": "13c3cdee13ee059ab779f0291d29054dab00a47dad8261375654de5540165fb0",
  "status": "starting",
  "input": {
    "data": "..."
  },
  "output": null,
  "error": null,
  "logs": null,
  "started_at": null,
  "created_at": "2023-03-28T21:47:58.566434Z",
  "completed_at": null
}
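The response fields can be inspected programmatically. A minimal sketch that parses the sample response body shown above (a new training starts in the "starting" status, with the timing and output fields still null):

```python
import json

# Sample response body from creating a training, as shown above.
response_body = """
{
  "id": "zz4ibbonubfz7carwiefibzgga",
  "version": "13c3cdee13ee059ab779f0291d29054dab00a47dad8261375654de5540165fb0",
  "status": "starting",
  "input": {"data": "..."},
  "output": null,
  "error": null,
  "logs": null,
  "started_at": null,
  "created_at": "2023-03-28T21:47:58.566434Z",
  "completed_at": null
}
"""

training = json.loads(response_body)

print(training["id"])      # zz4ibbonubfz7carwiefibzgga
print(training["status"])  # starting

# The run has not finished, so these are still null (None in Python).
assert training["output"] is None
assert training["completed_at"] is None
```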

Note that before you can create a training, you’ll need to create a model and use its name as the value for the destination field.
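The destination field takes the form {username}/<destination-model-name>. A small sketch that checks a string has that shape before submitting (the regex below is an illustrative assumption, not Replicate's exact naming rule):

```python
import re

# Hypothetical check: destination should look like "{username}/{model-name}".
# Replicate's real naming rules may be stricter than this pattern.
DESTINATION_RE = re.compile(r"^[\w.-]+/[\w.-]+$")

def is_valid_destination(dest: str) -> bool:
    """Return True if dest looks like 'username/model-name'."""
    return DESTINATION_RE.fullmatch(dest) is not None

print(is_valid_destination("alice/my-llama-finetune"))  # True
print(is_valid_destination("no-slash-here"))            # False
```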

Training inputs

Please see the model's README for more information about the model, its licensing, and acceptable use.