findix / sd-scripts

Training LoRA with sd-scripts

  • Public
  • 20 runs
  • GitHub

Run findix/sd-scripts with an API

Use one of our client libraries to get started quickly. Clicking on a library will take you to the Playground tab where you can tweak different inputs, see the results, and copy the corresponding code to use in your own project.
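
For example, with the Python client (a minimal sketch: the version hash is a placeholder you would copy from this page, dataset.zip is an assumed local file, and the fields follow the input schema below):

import replicate

# Start a LoRA training run; the inputs are documented in the schema below.
# "<version-hash>" is a placeholder for the model version shown on this page.
output = replicate.run(
    "findix/sd-scripts:<version-hash>",
    input={
        "pretrained_model_name_or_path": "CompVis/stable-diffusion-v1-4",
        "train_data_zip": open("dataset.zip", "rb"),
        "output_name": "my_lora",
    },
)
print(output)  # a URI pointing at the trained weights (see Output schema)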

Input schema

The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used.

pretrained_model_name_or_path
string, default: CompVis/stable-diffusion-v1-4
Base model name or path.

train_data_zip
string
Training dataset as a zip archive.

network_weights
string
Pretrained weights for the LoRA network. Upload a file here to continue training from an existing LoRA model.

training_comment
string, default: this LoRA model credit from replicate-sd-scripts
Comment stored with the trained model; a good place for the author name or the trigger keywords.

output_name
string
Name under which the trained model is saved.

save_model_as
string (enum), default: safetensors
Options: ckpt, pt, safetensors
File format to save the model in.

resolution
string, default: 512
Image resolution: either 'size' (a square side length) or 'width,height'. Non-square images are supported, but each dimension must be a multiple of 64.
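
As a sketch of that format rule (a hypothetical helper, not part of the model's API):

def parse_resolution(value: str) -> tuple[int, int]:
    """Parse 'size' or 'width,height' and enforce the multiple-of-64 rule."""
    parts = [int(p) for p in value.split(",")]
    if len(parts) == 1:        # square: '512' -> (512, 512)
        width = height = parts[0]
    elif len(parts) == 2:      # non-square: '512,768'
        width, height = parts
    else:
        raise ValueError("resolution must be 'size' or 'width,height'")
    if width % 64 or height % 64:
        raise ValueError("each dimension must be a multiple of 64")
    return width, height

assert parse_resolution("512") == (512, 512)
assert parse_resolution("512,768") == (512, 768)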

batch_size
integer, min: 1, default: 1
Number of images trained per batch; raise it according to how capable your GPU is.

max_train_epoches
integer, min: 1, default: 10
Maximum number of training epochs.

save_every_n_epochs
integer, min: 1, default: 2
Save a checkpoint every N epochs.

network_dim
integer, min: 1, default: 32
LoRA network dimension (rank). Values between 4 and 128 are typical; bigger is not necessarily better.

network_alpha
integer, min: 1, default: 32
LoRA network alpha. Commonly set to the same value as network_dim, or to a smaller value such as half of network_dim to prevent underflow. The upstream sd-scripts default is 1; a smaller alpha needs a higher learning rate.
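
For intuition (from the standard LoRA formulation used by sd-scripts, not stated on this page): the LoRA update is scaled by alpha/dim, so a smaller alpha shrinks the effective update, which is why it calls for a higher learning rate.

network_dim, network_alpha = 32, 16
scale = network_alpha / network_dim  # LoRA updates are multiplied by alpha/dim
print(scale)  # 0.5 -> half the effective update; compensate with a higher LR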

train_unet_only
boolean, default: False
Train the U-Net only. Sacrifices some quality but greatly reduces VRAM usage; can be enabled on 6 GB GPUs.

train_text_encoder_only
boolean, default: False
Train the text encoder only.

seed
integer, min: 1, default: 1337
Reproducible seed for test generations. Generating with a prompt plus this seed will most likely reproduce a training image, which is useful for trying out trigger keywords.

noise_offset
number, max: 1, default: 0
Adds a noise offset during training to improve generation of very dark or very bright images. If enabled, 0.1 is the recommended value.

keep_tokens
integer, default: 0
Keep the first N tokens unchanged when caption tokens are shuffled.
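
A hypothetical sketch of what this shuffling looks like for comma-separated caption tags, with the first tag pinned in place:

import random

def shuffle_caption(caption: str, keep_tokens: int) -> str:
    tags = [t.strip() for t in caption.split(",")]
    head, tail = tags[:keep_tokens], tags[keep_tokens:]
    random.shuffle(tail)              # only the tail is shuffled
    return ", ".join(head + tail)

print(shuffle_caption("mychar, 1girl, smile, outdoors", keep_tokens=1))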

learning_rate
number, default: 0.00006
Learning rate.

unet_lr
number, default: 0.00006
U-Net learning rate.

text_encoder_lr
number, default: 0.000007
Text encoder learning rate.

lr_scheduler
string (enum), default: cosine_with_restarts
Options: linear, cosine, cosine_with_restarts, polynomial, constant, constant_with_warmup
One of the six built-in dynamic learning-rate schedules: constant keeps the rate fixed; constant_with_warmup ramps up linearly, then holds constant; linear ramps up, then decays linearly; polynomial ramps up, then decays smoothly; cosine follows a cosine curve; cosine_with_restarts follows a cosine curve with hard restarts that jump back to the maximum. cosine_with_restarts (the default) or polynomial are recommended; combined with saving multiple epoch checkpoints, picking the best one is something of a black art.

lr_warmup_steps
integer, default: 0
Warmup steps. Only needs to be set when lr_scheduler is constant_with_warmup.

lr_scheduler_num_cycles
integer, min: 1, default: 1
Number of cosine-annealing restart cycles. Only takes effect when lr_scheduler is cosine_with_restarts.

min_bucket_reso
integer, min: 1, default: 256
Minimum resolution for aspect-ratio bucketing (ARB).

max_bucket_reso
integer, min: 1, default: 1024
Maximum resolution for aspect-ratio bucketing (ARB).

persistent_data_loader_workers
boolean, default: True
Keep the data-loader workers alive between epochs, which reduces or eliminates the lag between epochs (roughly a 2.5x speedup) but increases memory usage and can exhaust RAM.

clip_skip
integer, default: 2
CLIP skip. Something of a black art; 2 is the usual choice.

optimizer_type
string (enum), default: Lion
Options: adaFactor, AdamW, AdamW8bit, Lion, SGDNesterov, SGDNesterov8bit, DAdaptation
Optimizer. The newer Lion optimizer is recommended, with learning rates unet_lr = learning_rate = 6e-5 and text_encoder_lr = 7e-6.

network_module
string (enum), default: networks.lora
Options: networks.lora
Network module.
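
Taken together, a plausible full input that follows the Lion recommendation above (the file name, output name, and exact values are illustrative, not prescribed by the schema):

training_input = {
    "pretrained_model_name_or_path": "CompVis/stable-diffusion-v1-4",
    "train_data_zip": open("dataset.zip", "rb"),
    "output_name": "my_character_lora",
    "save_model_as": "safetensors",
    "resolution": "512,768",          # non-square; both multiples of 64
    "max_train_epoches": 10,
    "save_every_n_epochs": 2,         # keeps 5 checkpoints to compare
    "network_dim": 32,
    "network_alpha": 32,
    "optimizer_type": "Lion",
    "learning_rate": 6e-5,            # recommended pairing for Lion
    "unet_lr": 6e-5,
    "text_encoder_lr": 7e-6,
    "lr_scheduler": "cosine_with_restarts",
    "lr_scheduler_num_cycles": 1,
}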

Output schema

The shape of the response you’ll get when you run this model with an API.

Schema
{
  "type": "string",
  "title": "Output",
  "format": "uri"
}
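
Since the output is a single URI string, a run can be finished off by downloading the file it points to (a sketch; the helper name is ours):

import urllib.request

def save_weights(output_uri: str, dest: str = "my_lora.safetensors") -> None:
    # output_uri is the URI string returned by the run (see the schema above)
    urllib.request.urlretrieve(output_uri, dest)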