
edenartlab/sdxl-lora-trainer:2d65dc0d

Input

- string (required): Training images for the new LoRA concept (image URLs or a .zip file of images); see the usage sketch after this list.
- string: 'face' / 'style' / 'object'. Default: "style"
- string: 'sdxl' / 'sd15'. Default: "sdxl"
- string: Name of the new LoRA concept. Default: "unnamed"
- integer: Random seed for reproducible training. Leave empty to use a random seed.
- integer: Square pixel resolution your images will be resized to for training; 512 or 640 is recommended. Default: 512
- integer: Batch size (per device) for training. Default: 4
- integer: Number of training steps. Default: 400
- integer: Number of steps for token (textual inversion) warmup. Default: 0
- integer: Number of steps between saving checkpoints; set to a very high value to disable checkpointing if you don't need intermediate checkpoints. Default: 10000
- number: Final learning rate of the UNet (after warmup). Default: 0.001
- number: Learning rate for training the textual inversion embeddings; don't alter this unless you know what you're doing. Default: 0.001
- number: Fraction of training steps after which the textual inversion embeddings are frozen. Default: 1
- integer: Rank of the LoRA embeddings for the UNet. Default: 16
- string: Which captioning model to use; 'gpt4-v' and 'blip' are supported right now. Default: "blip"
- integer: How many new tokens to inject per concept. Default: 2
- boolean: Verbose output. Default: true
- boolean: For local debugging only (don't activate this on Replicate). Default: false
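
The listing above gives each input's type, description, and default, but the field names themselves are not shown on this page capture. Below is a minimal sketch of how a training run like this is typically started with the Replicate Python client; every input key in it is an assumption rather than the model's confirmed schema, and the version id is the abbreviated one from the title.

```python
# Minimal sketch, assuming hypothetical input names -- verify against the model's
# actual API schema before running. Requires `pip install replicate` and a
# REPLICATE_API_TOKEN in the environment.
import replicate

output = replicate.run(
    # "2d65dc0d" is the abbreviated version id shown above; use the full version hash.
    "edenartlab/sdxl-lora-trainer:2d65dc0d",
    input={
        # Hypothetical field names below -- not confirmed by this page.
        "lora_training_urls": "https://example.com/concept_images.zip",  # or a list of image URLs
        "concept_mode": "style",       # 'face' / 'style' / 'object'
        "sd_model_version": "sdxl",    # 'sdxl' / 'sd15'
        "name": "my_style",            # name of the new LoRA concept
        "resolution": 512,             # 512 or 640 recommended
        "train_batch_size": 4,
        "max_train_steps": 400,
    },
)
print(output)
```

The remaining inputs (seed, checkpointing interval, learning rates, LoRA rank, captioning model, tokens per concept, verbosity, debug flag) would go into the same `input` dict once their exact names are confirmed; anything omitted falls back to the defaults listed above.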
