
zhouzhengjun /lora_train:490c2d30

Input schema

The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used.

Field Type Default value Description
instance_data
string
A ZIP file containing your training images (JPG, PNG, etc.; size is not restricted). These images should contain the 'subject' you want the trained model to embed in the output domain, so it can later generate customized scenes beyond the training images. For best results, use images without noise or unrelated objects in the background.
seed
integer
1337
A seed for reproducible training
resolution
integer
512
The resolution for input images. All the images in the train/validation dataset will be resized to this resolution.
train_text_encoder
boolean
True
Whether to train the text encoder
train_batch_size
integer
1
Batch size (per device) for the training dataloader.
gradient_accumulation_steps
integer
4
Number of update steps to accumulate before performing a backward/update pass.
gradient_checkpointing
boolean
False
Whether or not to use gradient checkpointing to save memory at the expense of a slower backward pass.
scale_lr
boolean
True
Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.
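When `scale_lr` is enabled, trainers in this family typically multiply the base learning rate by the effective batch size. A minimal sketch, assuming the common diffusers-style convention (this model's exact formula is not documented here):

```python
# Sketch of learning-rate scaling when scale_lr is enabled; the exact formula
# this model uses is assumed to follow the common diffusers-style convention.
def scaled_lr(base_lr, train_batch_size, gradient_accumulation_steps, num_gpus=1):
    # Scale by the effective batch size: per-device batch x accumulation x GPUs.
    return base_lr * train_batch_size * gradient_accumulation_steps * num_gpus

# With the schema defaults (learning_rate_unet=1e-4, batch 1, accumulation 4):
print(scaled_lr(1e-4, 1, 4))  # 0.0004
```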
lr_scheduler
string (enum)
constant

Options:

linear, cosine, cosine_with_restarts, polynomial, constant, constant_with_warmup

The scheduler type to use
lr_warmup_steps
integer
0
Number of steps for the warmup in the lr scheduler.
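To illustrate how `lr_warmup_steps` interacts with the `constant_with_warmup` option above, here is a hedged sketch of the multiplier such a scheduler typically applies (this mirrors the common convention, not necessarily this model's exact code):

```python
# Assumed constant_with_warmup behavior: linear ramp to 1.0, then constant.
def constant_with_warmup(step, warmup_steps):
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    return 1.0

# With warmup_steps=0 (the default), the multiplier is 1.0 from step 0:
assert constant_with_warmup(0, 0) == 1.0
# With warmup_steps=100, the rate ramps linearly:
assert constant_with_warmup(50, 100) == 0.5
```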
clip_ti_decay
boolean
True
Whether or not to apply the Bayesian Learning Rule to the norm of the CLIP latent.
color_jitter
boolean
True
Whether or not to use color jitter during augmentation.
continue_inversion
boolean
False
Whether or not to continue inversion.
continue_inversion_lr
number
0.0001
The learning rate for continuing an inversion.
initializer_tokens
string
The tokens to use for the initializer. If not provided, tokens will be randomly initialized from a Gaussian N(0, 0.017^2).
learning_rate_text
number
0.00001
The learning rate for the text encoder.
learning_rate_ti
number
0.0005
The learning rate for textual inversion (TI).
learning_rate_unet
number
0.0001
The learning rate for the UNet.
lora_rank
integer
4
Rank of the LoRA. The larger it is, the more likely it is to capture fidelity, but the less editable the result; a larger rank also increases the size of the trained weights.
lora_dropout_p
number
0.1
Dropout probability for the LoRA layer. See the LoRA paper for details.
lora_scale
number
1
Scaling parameter at the end of the LoRA layer.
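To show how `lora_rank`, `lora_dropout_p`, and `lora_scale` fit together, here is an illustrative sketch of a LoRA linear layer. This is a generic LoRA construction under the usual conventions, not this model's actual implementation:

```python
import numpy as np

# Illustrative LoRA linear layer using lora_rank, lora_dropout_p, and
# lora_scale as in this schema; the model's real implementation may differ.
class LoRALinear:
    def __init__(self, d_in, d_out, rank=4, dropout_p=0.1, scale=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(d_out, d_in))        # frozen base weight
        self.A = rng.normal(size=(rank, d_in)) * 0.01  # trainable down-projection
        self.B = np.zeros((d_out, rank))               # trainable up-projection, zero-init
        self.dropout_p = dropout_p
        self.scale = scale

    def forward(self, x, train=False):
        h = x
        if train and self.dropout_p > 0:
            # Inverted dropout on the LoRA branch input during training
            mask = np.random.default_rng().random(x.shape) >= self.dropout_p
            h = x * mask / (1 - self.dropout_p)
        # Base path plus the scaled low-rank update
        return self.W @ x + self.scale * (self.B @ (self.A @ h))

layer = LoRALinear(d_in=8, d_out=8, rank=4)
x = np.ones(8)
# B is zero-initialized, so before training the LoRA path contributes nothing:
assert np.allclose(layer.forward(x), layer.W @ x)
```

With zero-initialized `B`, training starts from the frozen base model exactly, which is the standard LoRA design choice.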
lr_scheduler_lora
string (enum)
constant

Options:

linear, cosine, cosine_with_restarts, polynomial, constant, constant_with_warmup

The scheduler type to use
lr_warmup_steps_lora
integer
0
Number of steps for the warmup in the lr scheduler.
max_train_steps_ti
integer
500
The maximum number of training steps for the TI.
max_train_steps_tuning
integer
1000
The maximum number of training steps for the tuning.
placeholder_token_at_data
string
If this value is provided as 'X|Y', the target word X will be replaced with Y in captions. You must provide each caption as the image's filename (ignoring the extension), and Y must contain a placeholder token from below. To use this feature, you must also set the `use_template` argument to `none`.
placeholder_tokens
string
<s1>|<s2>
The placeholder tokens to use for the initializer. If not provided, the first tokens of the data will be used.
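Putting `placeholder_tokens` and `placeholder_token_at_data` together, here is a hedged sketch of how the pipe-separated convention might be parsed; the trainer's real logic is an assumption:

```python
# Hypothetical parsing of the pipe-separated placeholder arguments; the
# trainer's actual parsing may differ.
def parse_placeholders(placeholder_tokens, placeholder_token_at_data=None):
    tokens = placeholder_tokens.split("|")  # e.g. ["<s1>", "<s2>"]
    mapping = None
    if placeholder_token_at_data:
        # 'X|Y': replace word X with Y in each caption (Y holds a placeholder).
        source, target = placeholder_token_at_data.split("|", 1)
        mapping = (source, target)
    return tokens, mapping

tokens, mapping = parse_placeholders("<s1>|<s2>", "dog|<s1> dog")
caption = "a photo of a dog".replace(*mapping)  # "a photo of a <s1> dog"
```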
use_face_segmentation_condition
boolean
False
Whether or not to use the face segmentation condition.
use_template
string (enum)
object

Options:

object, style, none

The template to use for the inversion.
weight_decay_lora
number
0.001
The weight decay for the LoRA loss.
weight_decay_ti
number
0
The weight decay for the TI.

Output schema

The shape of the response you’ll get when you run this model with an API.

Schema
{"format": "uri", "title": "Output", "type": "string"}
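For reference, a minimal invocation sketch using the replicate Python client, with defaults assumed for every field not set. The ZIP URL is a placeholder, and the call itself is commented out because it requires a Replicate API token:

```python
# Hedged sketch of building the input for this version; the ZIP URL below is
# a placeholder, and the run is commented out since it needs an API token.
input_payload = {
    "instance_data": "https://example.com/subject_images.zip",  # placeholder URL
    "seed": 1337,
    "resolution": 512,
    "lora_rank": 4,
    "use_template": "object",
}

# import replicate
# output = replicate.run("zhouzhengjun/lora_train:490c2d30", input=input_payload)
# Per the output schema, `output` is a single URI string (the trained weights).
```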