edenartlab/sdxl-lora-trainer

LoRA trainer for both SDXL and SD1.5

Public · 7.5K runs

Input

Run this model in Node.js with one line of code:

npx create-replicate --model=edenartlab/sdxl-lora-trainer
Or set up a project from scratch:
npm install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>

Find your API token in your account settings.

Import and set up the client:
import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});

Run edenartlab/sdxl-lora-trainer using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.

const output = await replicate.run(
  "edenartlab/sdxl-lora-trainer:4767bababe6048535114863799de828c25ec5b935dab7f879d4fa29495118d22",
  {
    input: {
      name: "unnamed",
      ti_lr: 0.001,
      unet_lr: 0.0003,
      n_tokens: 3,
      lora_rank: 16,
      resolution: 512,
      concept_mode: "style",
      n_sample_imgs: 4,
      max_train_steps: 300,
      sd_model_version: "sdxl",
      train_batch_size: 4,
      checkpointing_steps: 10000,
      validation_img_size: 1024
    }
  }
);

console.log(output);
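The input object above can also be built programmatically, starting from the defaults shown and overriding only what you need. A minimal sketch; the `buildTrainerInput` helper and its validity checks are illustrative, not part of the Replicate SDK:

```javascript
// Illustrative helper (not part of the Replicate SDK): merges user
// overrides into the default input shown above and sanity-checks the
// enum-like fields before the payload is passed to replicate.run().
const TRAINER_DEFAULTS = {
  name: "unnamed",
  ti_lr: 0.001,
  unet_lr: 0.0003,
  n_tokens: 3,
  lora_rank: 16,
  resolution: 512,
  concept_mode: "style",
  n_sample_imgs: 4,
  max_train_steps: 300,
  sd_model_version: "sdxl",
  train_batch_size: 4,
  checkpointing_steps: 10000,
  validation_img_size: 1024,
};

function buildTrainerInput(overrides = {}) {
  const input = { ...TRAINER_DEFAULTS, ...overrides };
  if (!["style", "face", "object"].includes(input.concept_mode)) {
    throw new Error(`unknown concept_mode: ${input.concept_mode}`);
  }
  if (!["sdxl", "sd15"].includes(input.sd_model_version)) {
    throw new Error(`unknown sd_model_version: ${input.sd_model_version}`);
  }
  return input;
}
```

You would then pass the result as `{ input: buildTrainerInput({ concept_mode: "face" }) }` in the `replicate.run` call above.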

To learn more, take a look at the guide on getting started with Node.js.


Run time and cost

This model runs on NVIDIA L40S GPU hardware. We don't yet have enough runs of this model to provide performance information.

Readme

This trainer uses a single training script that is compatible with both SDXL and SD1.5.
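The two base models have different native resolutions (SDXL was trained at 1024px, SD1.5 at 512px), which matters when choosing `resolution` and `validation_img_size`. A minimal sketch; the helper and the mapping are an assumption about sensible defaults, not something the trainer enforces:

```javascript
// Illustrative helper: pick a validation image size matching the base
// model's native training resolution. The version strings mirror the
// trainer's sd_model_version input; the mapping itself is an assumption.
function nativeValidationSize(sdModelVersion) {
  switch (sdModelVersion) {
    case "sdxl":
      return 1024; // SDXL base model was trained at 1024x1024
    case "sd15":
      return 512; // SD1.5 base model was trained at 512x512
    default:
      throw new Error(`unknown sd_model_version: ${sdModelVersion}`);
  }
}
```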

The trainer has the following capabilities:
- automatic image captioning using BLIP
- automatic segmentation using CLIPSeg
- textual_inversion training of a new token to represent the concept
- 3 training modes: “style” / “face” / “object”
- full finetuning, LoRA, and DoRA training modes are supported in the code
- LoRA modules are possible for both the UNet and the text encoders

The generated checkpoint files are compatible with ComfyUI and AUTOMATIC1111.