Train a language model

Large language models can be fine-tuned for custom tasks using the Replicate training API.

Key capabilities:

  • Text generation - Tune models like LLaMA and GPT for specific domains and text styles.
  • Question answering - Customize models to answer domain-specific questions.
  • Text classification - Train models for text categorization applications.
  • Summarization - Adapt models to summarize various text genres.
  • Grammar correction - Improve grammaticality and fluency for target use cases.
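To make the training API concrete, here is a minimal sketch of assembling a training request body. The field names (`destination`, `input`, `train_data`, `num_train_epochs`) and the model name are assumptions for illustration; check the Replicate docs for the exact inputs the model you fine-tune accepts.

```python
import json

def build_training_request(destination, train_data_url, num_epochs=3):
    """Assemble a JSON body for a fine-tuning request (field names assumed)."""
    return {
        # Model the fine-tuned weights will be pushed to (hypothetical name).
        "destination": destination,
        "input": {
            # URL of a file of training examples (assumed input field).
            "train_data": train_data_url,
            "num_train_epochs": num_epochs,
        },
    }

body = build_training_request(
    "yourname/llama-7b-chat-custom",       # hypothetical destination model
    "https://example.com/train.jsonl",     # hypothetical data URL
)
print(json.dumps(body, indent=2))
```

In practice you would send this body with the official client or an authenticated HTTP request rather than constructing it by hand; the sketch only shows the shape of a request.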

Our Pick: LLaMA 7B Chat

For most people, we recommend fine-tuning the LLaMA 7B Chat model. At 7 billion parameters, it provides an excellent balance of performance and cost-effectiveness. The -chat version has been instruction-tuned, allowing it to adapt more easily to new tasks with less data and training time.
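"Less data" typically means a small file of prompt/completion pairs, one JSON object per line (JSONL). The field names `prompt` and `completion` are an assumption based on common instruction-tuning conventions; check the training docs of the specific model for the schema it expects. A minimal sketch:

```python
import json

# Two toy instruction-style training examples (invented for illustration).
examples = [
    {"prompt": "Summarize: The meeting covered Q3 revenue and hiring plans.",
     "completion": "Q3 revenue and hiring were discussed."},
    {"prompt": "Classify the sentiment: 'Great product, fast shipping!'",
     "completion": "positive"},
]

# JSONL: serialize each example as one JSON object per line.
jsonl = "\n".join(json.dumps(ex) for ex in examples)
print(jsonl)
```

A few hundred examples in this shape is often enough to steer an instruction-tuned model toward a new domain or style.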

LLaMA 7B Chat is by far the most popular fine-tuning model on Replicate, with over 2 million runs. This widespread adoption demonstrates its versatility and effectiveness across a variety of real-world applications. You can feel confident choosing a model with such a strong track record.

Upgrade Picks: LLaMA 13B Chat and LLaMA 70B Chat

If you need maximum performance and knowledge for demanding applications, consider the larger 13B and 70B chat models. The increased scale provides a boost in capability but comes with higher latency and cost per request. These are great choices if you have the budget and value the absolute best results.

The base LLaMA models are also available, but we only recommend these for advanced users doing significant customization. The -chat versions will be easier and faster to fine-tune for most applications.