
What is fine-tuning?


Fine-tuning is a process of taking a pre-trained model (sometimes called a foundation model) and training it with your own data to create a new model that is better suited to a specific task. You can fine-tune image models like SDXL on your own images to create a new version of the model that can generate images of a particular person, object, or style.

With Replicate, you can fine-tune and run your own image models in the cloud without having to set up any GPUs.
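For example, once a model on Replicate exposes a training interface, you can start a fine-tuning job from the Replicate Python client. The sketch below is illustrative only: the model version, input keys, and destination are placeholders, and the actual training inputs depend on the model you're fine-tuning.

import replicate

# Start a fine-tuning job against a trainable model on Replicate.
# "owner/model:version_id", the input keys, and the destination are
# placeholders -- check the model you're training for its real inputs.
training = replicate.trainings.create(
    version="owner/model:version_id",
    input={
        "input_images": "https://example.com/my-training-images.zip",
    },
    destination="your-username/your-fine-tuned-model",
)

print(training.status)

When the training finishes, the resulting weights are packaged as a new model at the destination you specified, which you can then run like any other model on Replicate.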

Fine-tuning image models

You can train an image model to generate images of a particular person, object, or style.

Use the guides in this section to get started with fine-tuning your own image models.

Training with Cog

If you're building and pushing your own public or private models using Cog, you can update your model to be fine-tunable using Cog's experimental training API.

This allows you to define a fine-tuning interface for an existing Cog model, so users of the model can bring their own training data to create derivative fine-tuned models. This is the same API used by open-source models on Replicate like SDXL and Llama 2. See the SDXL GitHub repo or Llama 2 GitHub repo for reference implementations.

Add fine-tuning by creating a train method:

from cog import Input, Path

def train(
    train_data: Path = Input(description="HTTPS URL of a file containing training data"),
    learning_rate: float = Input(description="learning rate, for learning!", default=1e-4, ge=0),
    seed: int = Input(description="random seed to use for training", default=None),
) -> str:
    return "hello, weights"

To learn more, check out Cog's training interface reference docs.