edenartlab/sdxl-lora-trainer:b4a19aae

Input schema

The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used.

| Field | Type | Default value | Description |
|---|---|---|---|
| lora_training_urls | string | | Training images for the new LoRA concept (image URLs or a .zip file of images) |
| concept_mode | string | object | 'face' / 'style' / 'object' (default) |
| sd_model_version | string | sdxl | 'sdxl' / 'sd15' |
| name | string | unnamed | Name of the new LoRA concept |
| seed | integer | | Random seed for reproducible training. Leave empty to use a random seed |
| resolution | integer | 512 | Square pixel resolution your images will be resized to for training; recommended: 512 or 640 |
| train_batch_size | integer | 4 | Batch size (per device) for training |
| max_train_steps | integer | 400 | Number of training steps |
| token_warmup_steps | integer | 50 | Number of steps for token (textual inversion) warmup |
| checkpointing_steps | integer | 10000 | Number of steps between saving checkpoints. Set to a very high number to disable checkpointing; intermediate checkpoints are usually not needed |
| is_lora | boolean | True | Whether to use LoRA training. If set to False, full fine-tuning is used |
| prodigy_d_coef | number | 0.5 | Multiplier for the internal learning rate of the Prodigy optimizer |
| ti_lr | number | 0.001 | Learning rate for training textual inversion embeddings. Don't alter unless you know what you're doing |
| freeze_ti_after_completion_f | number | 0.5 | Fraction of training steps after which to freeze the textual inversion embeddings |
| lora_rank | integer | 12 | Rank of the LoRA embeddings for the UNet |
| text_encoder_lora_optimizer | string | adamw | Which optimizer to use for the text encoder LoRA. ['adamw', None] are supported right now (None disables text-encoder LoRA training) |
| caption_model | string | gpt4-v | Which captioning model to use. ['gpt4-v', 'blip'] are supported right now |
| n_tokens | integer | 2 | How many new tokens to inject per concept |
| verbose | boolean | True | Verbose output |
| debug | boolean | False | For debugging locally only (don't activate this on Replicate) |
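As a concrete illustration of these fields, here is a minimal sketch of starting a training run through the Replicate Python client. It assumes the `replicate` package is installed and `REPLICATE_API_TOKEN` is set in the environment; the training URL and concept name are placeholder values, and any field left out falls back to the defaults listed above.

```python
# Minimal sketch: start a training run on this version via the Replicate
# Python client. Requires `pip install replicate` and REPLICATE_API_TOKEN.
import replicate

output = replicate.run(
    "edenartlab/sdxl-lora-trainer:b4a19aae",  # version shown on this page
    input={
        # Placeholder URL -- point this at your own image URLs or a .zip of images
        "lora_training_urls": "https://example.com/my_concept_images.zip",
        "concept_mode": "face",       # 'face' / 'style' / 'object'
        "sd_model_version": "sdxl",   # 'sdxl' / 'sd15'
        "name": "my_face_lora",       # placeholder concept name
        "resolution": 512,
        "max_train_steps": 400,
        "lora_rank": 12,
        # All other fields keep the defaults from the table above
    },
)
```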

Output schema

The shape of the response you’ll get when you run this model with an API.

Schema
{'items': {'properties': {'attributes': {'title': 'Attributes',
                                         'type': 'object'},
                          'files': {'default': [],
                                    'items': {'format': 'uri',
                                              'type': 'string'},
                                    'title': 'Files',
                                    'type': 'array'},
                          'isFinal': {'default': False,
                                      'title': 'Isfinal',
                                      'type': 'boolean'},
                          'name': {'title': 'Name', 'type': 'string'},
                          'progress': {'title': 'Progress', 'type': 'number'},
                          'thumbnails': {'default': [],
                                         'items': {'format': 'uri',
                                                   'type': 'string'},
                                         'title': 'Thumbnails',
                                         'type': 'array'}},
           'title': 'CogOutput',
           'type': 'object'},
 'title': 'Output',
 'type': 'array',
 'x-cog-array-type': 'iterator'}
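
Because `x-cog-array-type` is `iterator`, the response arrives as a stream of `CogOutput` items rather than a single value. The sketch below shows one way to consume it, assuming `output` is the value returned by the `replicate.run` call above and that each item is a plain dict shaped like this schema; the specific field handling is illustrative.

```python
# Minimal sketch: consume the streamed CogOutput items described by the
# schema above. Assumes `output` is the iterable returned by replicate.run.
for item in output:
    # Intermediate items typically carry a progress fraction
    progress = item.get("progress")
    if progress is not None:
        print(f"training progress: {progress:.0%}")

    # The final item has isFinal=True and lists the trained artifacts
    if item.get("isFinal"):
        print("name:", item.get("name"))
        print("files:", item.get("files", []))            # URIs of trained LoRA files
        print("thumbnails:", item.get("thumbnails", []))  # preview image URIs
```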