edenartlab/sdxl-lora-trainer:7c13eaaa
Input schema
The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used. A sketch of an example API call follows the table.
Field | Type | Default value | Description |
---|---|---|---|
name | string | | Name of the new LoRA concept |
lora_training_urls | string | | Training images for the new LoRA concept (can be image URLs or a .zip file of images) |
concept_mode | string | object | 'face' / 'style' / 'object' (default) |
seed | integer | | Random seed for reproducible training. Leave empty to use a random seed |
resolution | integer | 1024 | Square pixel resolution to which your images will be resized for training; recommended range 768-1024 |
train_batch_size | integer | 2 | Batch size (per device) for training |
num_train_epochs | integer | 10000 | Number of epochs to loop through your training dataset |
max_train_steps | integer | 1200 | Number of individual training steps. Takes precedence over num_train_epochs |
checkpointing_steps | integer | 10000 | Number of steps between saving checkpoints. Set to a very high number to disable checkpointing if you don't need intermediate checkpoints |
is_lora | boolean | True | Whether to use LoRA training. If set to False, full fine-tuning will be used |
prodigy_d_coef | number | 0.5 | Multiplier for the internal learning rate of the Prodigy optimizer |
ti_lr | number | 0.002 | Learning rate for training textual inversion embeddings. Don't alter unless you know what you're doing |
ti_weight_decay | number | 0.0001 | Weight decay for textual inversion embeddings. Don't alter unless you know what you're doing |
lora_weight_decay | number | 0.005 | Weight decay for LoRA parameters. Don't alter unless you know what you're doing |
lora_rank | integer | 6 | Rank of the LoRA embeddings. For faces, 5 is good; for complex concepts/styles you can try 8 or 12 |
caption_prefix | string | | Prefix text prepended to automatic captioning. Must contain 'TOK', e.g. 'a photo of TOK, '. If empty, ChatGPT will handle this automatically |
left_right_flip_augmentation | boolean | True | Add a left-right flipped version of each image to the training data; recommended for most cases. If you are learning a face, you probably want to disable this |
augment_imgs_up_to_n | integer | 8 | Apply data augmentation until there are n training samples (0 disables augmentation completely) |
mask_target_prompts | string | | Prompt that describes the most important part of the image; used for CLIP segmentation. For example, if you are learning a person, 'face' would be a good segmentation prompt |
crop_based_on_salience | boolean | True | If you want to crop the image to `target_size` based on the important parts of the image, set this to True. If you want to crop based on face detection, set this to False |
use_face_detection_instead | boolean | False | Whether to use face detection instead of CLIPSeg for masking. For face applications, we recommend this option |
clipseg_temperature | number | 0.75 | How blurry you want the CLIPSeg mask to be. We recommend a value between 0.5 and 1.0. Decrease it for a sharper (but more error-prone) mask |
verbose | boolean | True | Verbose output |
run_name | string | 1700699365 | Subdirectory where all files will be saved |
debug | boolean | False | For debugging locally only (don't activate this on Replicate) |
hard_pivot | boolean | False | Use a hard freeze for ti_lr. If set to False, a soft transition of learning rates is used |
off_ratio_power | number | 0.1 | How strongly to correct the embedding std vs. the avg-std (0 = off, 0.05 = weak, 0.1 = standard) |
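As a sketch of how these fields map onto an actual call, here is a minimal example using the Replicate Python client. The concept name and training-data URL are placeholders, and `7c13eaaa` is only the truncated version ID shown on this page; in practice you would pass the full version hash and your own inputs.

```python
import replicate

# Minimal sketch of a training call with this model version.
# The concept name and zip URL are placeholders, and "7c13eaaa" is the
# truncated version ID shown on this page; use the full version hash.
output = replicate.run(
    "edenartlab/sdxl-lora-trainer:7c13eaaa",
    input={
        "name": "my_concept",                                    # placeholder concept name
        "lora_training_urls": "https://example.com/images.zip",  # placeholder training data
        "concept_mode": "face",                                   # 'face' / 'style' / 'object'
        "resolution": 1024,
        "max_train_steps": 1200,
        "lora_rank": 6,
        "seed": 42,
    },
)

# Per the output schema below, the result is an iterator of CogOutput
# objects, so it can be walked item by item as training progresses.
for item in output:
    print(item)
```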
Output schema
The shape of the response you'll get when you run this model with an API. A short sketch of consuming this output follows the schema.
Schema
{
  "items": {
    "properties": {
      "attributes": {"title": "Attributes", "type": "object"},
      "files": {
        "items": {"format": "uri", "type": "string"},
        "title": "Files",
        "type": "array"
      },
      "isFinal": {"default": false, "title": "Isfinal", "type": "boolean"},
      "name": {"title": "Name", "type": "string"},
      "progress": {"title": "Progress", "type": "number"},
      "thumbnails": {
        "default": [],
        "items": {"format": "uri", "type": "string"},
        "title": "Thumbnails",
        "type": "array"
      }
    },
    "required": ["files"],
    "title": "CogOutput",
    "type": "object"
  },
  "title": "Output",
  "type": "array",
  "x-cog-array-type": "iterator"
}
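As a rough illustration of how this array of CogOutput objects might be consumed, here is a small sketch that walks the items, reports progress, and collects the file URLs from the final entry. The `result` list below is a placeholder shaped like the schema, not real model output.

```python
# `result` is a placeholder in the shape of the output schema above.
result = [
    {"name": "my_concept", "progress": 0.5, "isFinal": False, "files": []},
    {
        "name": "my_concept",
        "progress": 1.0,
        "isFinal": True,
        "files": ["https://example.com/my_concept_lora.tar"],  # placeholder URL
        "thumbnails": [],
        "attributes": {},
    },
]

final_files = []
for item in result:
    progress = item.get("progress")
    if progress is not None:
        print(f"{item.get('name', 'run')}: {progress:.0%} complete")
    if item.get("isFinal"):
        # Only the final item carries the trained artifacts we care about.
        final_files = item.get("files", [])

print("Trained LoRA artifacts:", final_files)
```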