edenartlab /sdxl-lora-trainer:36a11107
Input schema
The fields you can use to run this model with an API. If you don’t give a value for a field its default value will be used.
Field | Type | Default value | Description |
---|---|---|---|
name | string | | Name of the new LoRA concept |
lora_training_urls | string | | Training images for the new LoRA concept (image URLs or a .zip file of images) |
mode | string | concept | 'face' / 'style' / 'concept' (default) |
checkpoint | string (enum) | sdxl-v1.0 | Which Stable Diffusion checkpoint to use. Options: sdxl-v1.0 |
seed | integer | | Random seed for reproducible training. Leave empty to use a random seed |
resolution | integer | 960 | Square pixel resolution to which your images will be resized for training; recommended range [768-1024] |
train_batch_size | integer | 2 | Batch size (per device) for training |
num_train_epochs | integer | 10000 | Number of epochs to loop through your training dataset |
max_train_steps | integer | 800 | Number of individual training steps; takes precedence over num_train_epochs |
checkpointing_steps | integer | 10000 | Number of steps between saving checkpoints. Set to a very high number to disable intermediate checkpointing |
is_lora | boolean | True | Whether to use LoRA training. If set to False, full fine-tuning is used |
unet_learning_rate | number | 0.000001 | Learning rate for the U-Net (only used for full fine-tuning, not for LoRA). Recommended range `1e-6` to `1e-5` |
ti_lr | number | 0.001 | Learning rate for training textual inversion embeddings. Don't alter unless you know what you're doing |
lora_lr | number | 0.0002 | Learning rate for training LoRA matrices. Don't alter unless you know what you're doing |
ti_weight_decay | number | 0.0001 | Weight decay for textual inversion embeddings. Don't alter unless you know what you're doing |
lora_weight_decay | number | 0.0001 | Weight decay for LoRA. Don't alter unless you know what you're doing |
lora_rank | integer | 4 | Rank of the LoRA embeddings. For faces, 4 is good; for complex concepts, try 6 or 8 |
lr_scheduler | string (enum) | constant | Learning rate scheduler to use for training. Options: constant, linear |
lr_warmup_steps | integer | 50 | Number of warmup steps for lr schedulers with warmup |
caption_prefix | string | | Prefix text prepended to automatic captions. Must contain 'TOK', e.g. 'a photo of TOK, '. If empty, ChatGPT handles this automatically |
left_right_flip_augmentation | boolean | True | Add a left-right flipped version of each image to the training data; recommended for most cases. If you are learning a face, you probably want to disable this |
mask_target_prompts | string | | Prompt describing the most important part of the image, used for CLIP segmentation. For example, if you are learning a person, 'face' would be a good segmentation prompt |
crop_based_on_salience | boolean | True | Set to True to crop images to `target_size` based on the salient parts of the image; set to False to crop based on face detection |
use_face_detection_instead | boolean | False | Use face detection instead of CLIPSeg for masking. Recommended for face applications |
clipseg_temperature | number | 1 | How blurry the CLIPSeg mask should be; recommended between `0.5` and `1.0`. Decrease for a sharper (but more error-prone) mask |
verbose | boolean | True | Verbose output |
run_name | string | 1693871721 | Subdirectory where all files will be saved |
debug | boolean | False | For local debugging only (don't activate this on Replicate) |
hard_pivot | boolean | True | Use a hard freeze for ti_lr. If set to False, learning rates transition smoothly |
off_ratio_power | number | 0.1 | How strongly to correct the embedding std vs the avg std (0 = off, 0.05 = weak, 0.1 = standard) |
Output schema
The shape of the response you’ll get when you run this model with an API.
Schema
{
  "type": "array",
  "title": "Output",
  "x-cog-array-type": "iterator",
  "items": {
    "type": "object",
    "title": "CogOutput",
    "required": ["files"],
    "properties": {
      "attributes": {"title": "Attributes", "type": "object"},
      "files": {
        "title": "Files",
        "type": "array",
        "items": {"type": "string", "format": "uri"}
      },
      "isFinal": {"title": "Isfinal", "type": "boolean", "default": false},
      "name": {"title": "Name", "type": "string"},
      "progress": {"title": "Progress", "type": "number"},
      "thumbnails": {
        "title": "Thumbnails",
        "type": "array",
        "default": [],
        "items": {"type": "string", "format": "uri"}
      }
    }
  }
}
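Since the output is an iterator of CogOutput objects, a caller typically scans for the item marked final and collects its file URLs. A minimal sketch, using hypothetical output items shaped like the schema above (the URLs and names are made up):

```python
# Hypothetical stream of output items matching the CogOutput schema.
outputs = [
    {"files": ["https://example.com/intermediate.tar"],
     "name": "checkpoint-400", "progress": 0.5,
     "isFinal": False, "thumbnails": []},
    {"files": ["https://example.com/final.tar"],
     "name": "final", "progress": 1.0,
     "isFinal": True, "thumbnails": ["https://example.com/thumb.jpg"]},
]

# "files" is the only required key; "isFinal" defaults to False,
# so use .get() with that default when scanning.
final = next(item for item in outputs if item.get("isFinal", False))
lora_urls = final["files"]
```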