What is fine-tuning?
Fine-tuning is the process of taking a pre-trained model (sometimes called a foundation model) and training it on your own data to create a new model that is better suited to a specific task. You can fine-tune image models like SDXL on your own images to create a new version of the model that can generate images of a particular person, object, or style. You can also fine-tune language models like Llama 2 to make them better at a particular task, like answering questions or generating text in a specific style.
With Replicate, you can fine-tune and run your own image models and language models in the cloud without having to set up any GPUs.
You can train a language model to classify text, answer questions, power a support chatbot, or generate text in a particular style. Some of these tasks can be accomplished with prompting alone, but a prompt can only carry a limited amount of data. When you have a large dataset, fine-tuning is the best way to get higher-quality results from your model.
Use these guides to get started with fine-tuning your own language models:
- Fine-tune Llama 2 on Replicate - A crash course in fine-tuning your own Llama model
- Fine-tune a language model - An in-depth guide with details about preparing training data, training times, costs, etc.
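As a rough sketch of what kicking off a language-model fine-tune looks like, the example below prepares training data as JSONL prompt/completion pairs and builds the input for a training job with the Replicate Python client. The parameter names (`train_data`, `num_train_epochs`), the model version string, and the destination name are all illustrative placeholders; check the trainer's documented inputs on Replicate before running this.

```python
import json
import os

# Hypothetical training data: prompt/completion pairs, written out as
# JSONL, the format Replicate's language-model trainers typically expect.
examples = [
    {"prompt": "What is the capital of France?", "completion": "Paris"},
    {"prompt": "What is the capital of Japan?", "completion": "Tokyo"},
]

with open("train_data.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")

# Illustrative input payload for a fine-tuning job. The training data
# must be hosted at a URL the trainer can fetch.
training_input = {
    "train_data": "https://example.com/train_data.jsonl",  # placeholder URL
    "num_train_epochs": 3,
}

# Only attempt the API call if credentials are configured.
if os.environ.get("REPLICATE_API_TOKEN"):
    import replicate

    training = replicate.trainings.create(
        version="meta/llama-2-7b:<version-id>",  # placeholder version ID
        input=training_input,
        destination="your-username/my-llama-finetune",  # placeholder name
    )
    print(training.status)
```

In practice you would upload your real dataset somewhere public (or to Replicate's file storage), swap in the actual trainer version from the model page, and poll the training until it finishes.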
You can train an image model to generate images of:
- a particular person, like our colleague Zeke
- an object, like the Apple Vision Pro
- a style, like the Barbie movie
Use these guides to get started with fine-tuning your own image models:
- Fine-tune SDXL with your own images. Create your own models fine-tuned on faces or styles using the latest version of Stable Diffusion.
- Train and deploy a DreamBooth model. We recommend DreamBooth for generating images of people.
- LoRA: A faster way to fine-tune Stable Diffusion. LoRA is faster and cheaper than DreamBooth. It’s better at styles, but worse at faces.
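The image-model workflows above follow the same pattern: zip up your training images, host the archive at a URL, and start a training job. The sketch below shows an assumed input for an SDXL fine-tune; the parameter names (`input_images`, `token_string`, `max_train_steps`), version string, and destination are placeholders to verify against the trainer's model page before running.

```python
import os

# Illustrative input for an SDXL fine-tune on Replicate. Verify these
# parameter names against the trainer's documented inputs.
training_input = {
    # A zip archive of your training images, hosted at a public URL.
    "input_images": "https://example.com/my-training-images.zip",
    # A rare token the model will learn to associate with your subject,
    # so prompts like "a photo of TOK" generate your person or object.
    "token_string": "TOK",
    "max_train_steps": 1000,
}

# Only attempt the API call if credentials are configured.
if os.environ.get("REPLICATE_API_TOKEN"):
    import replicate

    training = replicate.trainings.create(
        version="stability-ai/sdxl:<version-id>",  # placeholder version ID
        input=training_input,
        destination="your-username/my-sdxl-finetune",  # placeholder name
    )
    print(training.status)
```

Once the training completes, the destination model can be run like any other Replicate model, using the token string in your prompts to invoke the fine-tuned subject or style.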