findix / sd-scripts
Training LoRA with sd-scripts
- Public
- 20 runs
- GitHub
Run findix/sd-scripts with an API
Use one of our client libraries to get started quickly. Clicking on a library will take you to the Playground tab where you can tweak different inputs, see the results, and copy the corresponding code to use in your own project.
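Per the input schema below, only `train_data_zip` is required; everything else falls back to its default. A minimal request input might look like this (the dataset URL is a placeholder, not a real file):

```json
{
  "train_data_zip": "https://example.com/my-dataset.zip",
  "output_name": "my_lora",
  "resolution": "512,768"
}
```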
Input schema
The fields you can use to run this model with an API. If you don't give a value for a field, its default value is used.
| Field | Type | Default value | Description |
|---|---|---|---|
| pretrained_model_name_or_path | string | CompVis/stable-diffusion-v1-4 | Base model name or path. |
| train_data_zip | string | (required) | Training dataset as a zip file. |
| network_weights | string | (none) | Pretrained weights for the LoRA network; upload a file to continue training from an existing LoRA model. |
| training_comment | string | this LoRA model credit from replicate-sd-scripts | Training comment; for example the author name or the trigger keywords. |
| output_name | string | (none) | Name under which the trained model is saved. |
| save_model_as | string (enum) | safetensors | Model save format. Options: ckpt, pt, safetensors. |
| resolution | string | 512 | Image resolution, either 'size' (square side length) or 'width,height'. Non-square resolutions are supported, but each side must be a multiple of 64. |
| batch_size | integer (min: 1) | 1 | Number of images processed per training step; raise it on more capable GPUs. |
| max_train_epoches | integer (min: 1) | 10 | Maximum number of training epochs. |
| save_every_n_epochs | integer (min: 1) | 2 | Save a checkpoint every N epochs. |
| network_dim | integer (min: 1) | 32 | Network dimension; 4–128 is typical, and larger is not necessarily better. |
| network_alpha | integer (min: 1) | 32 | Network alpha; commonly set to the same value as network_dim, or to a smaller value such as half of network_dim to prevent underflow. The upstream default is 1, and a smaller alpha requires a higher learning rate. |
| train_unet_only | boolean | False | Train only the U-Net. Sacrifices some quality for a large reduction in VRAM usage; can be enabled on 6 GB GPUs. |
| train_text_encoder_only | boolean | False | Train only the text encoder. |
| seed | integer (min: 1) | 1337 | Reproducible seed. Generating with a prompt and this seed will most likely reproduce a training image, which is useful for testing trigger keywords. |
| noise_offset | number (max: 1) | 0 | Noise offset added during training to improve generation of very dark or very bright images; if enabled, 0.1 is the recommended value. |
| keep_tokens | integer | 0 | Keep the first N tokens unchanged when shuffling caption tokens. |
| learning_rate | number | 0.00006 | Learning rate. |
| unet_lr | number | 0.00006 | U-Net learning rate. |
| text_encoder_lr | number | 0.000007 | Text encoder learning rate. |
| lr_scheduler | string (enum) | cosine_with_restarts | Learning-rate schedule; one of PyTorch's six built-in schedulers. Options: linear, cosine, cosine_with_restarts, polynomial, constant, constant_with_warmup. constant holds the rate fixed; constant_with_warmup increases linearly then holds constant; linear increases then decreases linearly; polynomial increases linearly then decays smoothly; cosine follows a cosine curve; cosine_with_restarts is cosine with hard restarts that jump back to the maximum. cosine_with_restarts or polynomial is recommended, especially when saving results from multiple epochs to compare. |
| lr_warmup_steps | integer | 0 | Warmup steps; only needed when lr_scheduler is constant_with_warmup. |
| lr_scheduler_num_cycles | integer (min: 1) | 1 | Number of cosine-annealing restarts; only takes effect when lr_scheduler is cosine_with_restarts. |
| min_bucket_reso | integer (min: 1) | 256 | Minimum aspect-ratio-bucketing (ARB) resolution. |
| max_bucket_reso | integer (min: 1) | 1024 | Maximum aspect-ratio-bucketing (ARB) resolution. |
| persistent_data_loader_workers | boolean | True | Keeps data-loader workers alive between epochs, further reducing or eliminating the pause between them (roughly a 2.5× speedup), but at the cost of higher memory usage and a risk of running out of memory. |
| clip_skip | integer | 2 | CLIP skip; 2 is the usual choice, though the benefit is largely empirical. |
| optimizer_type | string (enum) | Lion | Optimizer. Options: adaFactor, AdamW, AdamW8bit, Lion, SGDNesterov, SGDNesterov8bit, DAdaptation. The newer Lion optimizer is recommended, with unet_lr = learning_rate = 6e-5 and text_encoder_lr = 7e-6. |
| network_module | string (enum) | networks.lora | Network module. Options: networks.lora. |
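The `resolution` field accepts either a single size or `width,height`, with every side a multiple of 64. A minimal sketch of that check (a hypothetical client-side helper, not part of the model itself):

```python
def parse_resolution(resolution: str) -> tuple[int, int]:
    """Parse 'size' or 'width,height' and enforce the multiple-of-64 rule."""
    parts = [int(p) for p in resolution.split(",")]
    if len(parts) == 1:
        width = height = parts[0]
    elif len(parts) == 2:
        width, height = parts
    else:
        raise ValueError("resolution must be 'size' or 'width,height'")
    for side in (width, height):
        if side % 64 != 0:
            raise ValueError(f"{side} is not a multiple of 64")
    return width, height


print(parse_resolution("512"))      # square: (512, 512)
print(parse_resolution("512,768"))  # non-square: (512, 768)
```

Validating locally before submitting a run avoids paying for a job that fails immediately on a bad resolution string.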
{
"type": "object",
"title": "Input",
"required": [
"train_data_zip"
],
"properties": {
"seed": {
"type": "integer",
"title": "Seed",
"default": 1337,
"minimum": 1,
"x-order": 14,
"description": "reproducable seed | \u8bbe\u7f6e\u8dd1\u6d4b\u8bd5\u7528\u7684\u79cd\u5b50\uff0c\u8f93\u5165\u4e00\u4e2aprompt\u548c\u8fd9\u4e2a\u79cd\u5b50\u5927\u6982\u7387\u5f97\u5230\u8bad\u7ec3\u56fe\u3002\u53ef\u4ee5\u7528\u6765\u8bd5\u89e6\u53d1\u5173\u952e\u8bcd"
},
"unet_lr": {
"type": "number",
"title": "Unet Lr",
"default": 6e-05,
"minimum": 0,
"x-order": 18,
"description": "UNet learning rate | UNet \u5b66\u4e60\u7387"
},
"clip_skip": {
"type": "integer",
"title": "Clip Skip",
"default": 2,
"minimum": 0,
"x-order": 26,
"description": "clip skip | \u7384\u5b66 \u4e00\u822c\u7528 2"
},
"batch_size": {
"type": "integer",
"title": "Batch Size",
"default": 1,
"minimum": 1,
"x-order": 7,
"description": "batch size \u4e00\u6b21\u6027\u8bad\u7ec3\u56fe\u7247\u6279\u5904\u7406\u6570\u91cf\uff0c\u6839\u636e\u663e\u5361\u8d28\u91cf\u5bf9\u5e94\u8c03\u9ad8"
},
"resolution": {
"type": "string",
"title": "Resolution",
"default": "512",
"x-order": 6,
"description": "image resolution must be 'size' or 'width,height'. \u56fe\u7247\u5206\u8fa8\u7387\uff0c\u6b63\u65b9\u5f62\u8fb9\u957f \u6216 \u5bbd,\u9ad8\u3002\u652f\u6301\u975e\u6b63\u65b9\u5f62\uff0c\u4f46\u5fc5\u987b\u662f 64 \u500d\u6570"
},
"keep_tokens": {
"type": "integer",
"title": "Keep Tokens",
"default": 0,
"minimum": 0,
"x-order": 16,
"description": "keep heading N tokens when shuffling caption tokens | \u5728\u968f\u673a\u6253\u4e71 tokens \u65f6\uff0c\u4fdd\u7559\u524d N \u4e2a\u4e0d\u53d8"
},
"network_dim": {
"type": "integer",
"title": "Network Dim",
"default": 32,
"minimum": 1,
"x-order": 10,
"description": "network dim | \u5e38\u7528 4~128\uff0c\u4e0d\u662f\u8d8a\u5927\u8d8a\u597d"
},
"output_name": {
"type": "string",
"title": "Output Name",
"x-order": 4,
"description": "output model name | \u6a21\u578b\u4fdd\u5b58\u540d\u79f0"
},
"lr_scheduler": {
"enum": [
"linear",
"cosine",
"cosine_with_restarts",
"polynomial",
"constant",
"constant_with_warmup"
],
"type": "string",
"title": "lr_scheduler",
"description": "\"linear\", \"cosine\", \"cosine_with_restarts\", \"polynomial\", \"constant\", \"constant_with_warmup\" | PyTorch\u81ea\u5e266\u79cd\u52a8\u6001\u5b66\u4e60\u7387\u51fd\u6570\nconstant\uff0c\u5e38\u91cf\u4e0d\u53d8, constant_with_warmup \u7ebf\u6027\u589e\u52a0\u540e\u4fdd\u6301\u5e38\u91cf\u4e0d\u53d8, linear \u7ebf\u6027\u589e\u52a0\u7ebf\u6027\u51cf\u5c11, polynomial \u7ebf\u6027\u589e\u52a0\u540e\u5e73\u6ed1\u8870\u51cf, cosine \u4f59\u5f26\u6ce2\u66f2\u7ebf, cosine_with_restarts \u4f59\u5f26\u6ce2\u786c\u91cd\u542f\uff0c\u77ac\u95f4\u6700\u5927\u503c\u3002\n\u63a8\u8350\u9ed8\u8ba4cosine_with_restarts\u6216\u8005polynomial\uff0c\u914d\u5408\u8f93\u51fa\u591a\u4e2aepoch\u7ed3\u679c\u66f4\u7384\u5b66",
"default": "cosine_with_restarts",
"x-order": 20
},
"noise_offset": {
"type": "number",
"title": "Noise Offset",
"default": 0,
"maximum": 1,
"minimum": 0,
"x-order": 15,
"description": "noise offset | \u5728\u8bad\u7ec3\u4e2d\u6dfb\u52a0\u566a\u58f0\u504f\u79fb\u6765\u6539\u826f\u751f\u6210\u975e\u5e38\u6697\u6216\u8005\u975e\u5e38\u4eae\u7684\u56fe\u50cf\uff0c\u5982\u679c\u542f\u7528\uff0c\u63a8\u8350\u53c2\u6570\u4e3a 0.1"
},
"learning_rate": {
"type": "number",
"title": "Learning Rate",
"default": 6e-05,
"minimum": 0,
"x-order": 17,
"description": "Learning rate | \u5b66\u4e60\u7387"
},
"network_alpha": {
"type": "integer",
"title": "Network Alpha",
"default": 32,
"minimum": 1,
"x-order": 11,
"description": "network alpha | \u5e38\u7528\u4e0e network_dim \u76f8\u540c\u7684\u503c\u6216\u8005\u91c7\u7528\u8f83\u5c0f\u7684\u503c\uff0c\u5982 network_dim\u7684\u4e00\u534a \u9632\u6b62\u4e0b\u6ea2\u3002\u9ed8\u8ba4\u503c\u4e3a 1\uff0c\u4f7f\u7528\u8f83\u5c0f\u7684 alpha \u9700\u8981\u63d0\u5347\u5b66\u4e60\u7387"
},
"save_model_as": {
"enum": [
"ckpt",
"pt",
"safetensors"
],
"type": "string",
"title": "save_model_as",
"description": "model save ext | \u6a21\u578b\u4fdd\u5b58\u683c\u5f0f ckpt, pt, safetensors",
"default": "safetensors",
"x-order": 5
},
"network_module": {
"enum": [
"networks.lora"
],
"type": "string",
"title": "network_module",
"description": "Network module",
"default": "networks.lora",
"x-order": 28
},
"optimizer_type": {
"enum": [
"adaFactor",
"AdamW",
"AdamW8bit",
"Lion",
"SGDNesterov",
"SGDNesterov8bit",
"DAdaptation"
],
"type": "string",
"title": "optimizer_type",
"description": "\u4f18\u5316\u5668\uff0c\"adaFactor\",\"AdamW\",\"AdamW8bit\",\"Lion\",\"SGDNesterov\",\"SGDNesterov8bit\",\"DAdaptation\", \u63a8\u8350 \u65b0\u4f18\u5316\u5668Lion\u3002\u63a8\u8350\u5b66\u4e60\u7387unetlr=lr=6e-5,tenclr=7e-6",
"default": "Lion",
"x-order": 27
},
"train_data_zip": {
"type": "string",
"title": "Train Data Zip",
"format": "uri",
"x-order": 1,
"description": "train dataset zip file | \u8bad\u7ec3\u6570\u636e\u96c6zip\u538b\u7f29\u5305"
},
"lr_warmup_steps": {
"type": "integer",
"title": "Lr Warmup Steps",
"default": 0,
"minimum": 0,
"x-order": 21,
"description": "warmup steps | \u4ec5\u5728 lr_scheduler \u4e3a constant_with_warmup \u65f6\u9700\u8981\u586b\u5199\u8fd9\u4e2a\u503c"
},
"max_bucket_reso": {
"type": "integer",
"title": "Max Bucket Reso",
"default": 1024,
"minimum": 1,
"x-order": 24,
"description": "arb max resolution | arb \u6700\u5927\u5206\u8fa8\u7387"
},
"min_bucket_reso": {
"type": "integer",
"title": "Min Bucket Reso",
"default": 256,
"minimum": 1,
"x-order": 23,
"description": "arb min resolution | arb \u6700\u5c0f\u5206\u8fa8\u7387"
},
"network_weights": {
"type": "string",
"title": "Network Weights",
"format": "uri",
"x-order": 2,
"description": "pretrained weights for LoRA network | \u82e5\u9700\u8981\u4ece\u5df2\u6709\u7684 LoRA \u6a21\u578b\u4e0a\u7ee7\u7eed\u8bad\u7ec3\uff0c\u8bf7\u4e0a\u4f20\u6587\u4ef6"
},
"text_encoder_lr": {
"type": "number",
"title": "Text Encoder Lr",
"default": 7e-06,
"minimum": 0,
"x-order": 19,
"description": "Text Encoder learning rate | Text Encoder \u5b66\u4e60\u7387"
},
"train_unet_only": {
"type": "boolean",
"title": "Train Unet Only",
"default": false,
"x-order": 12,
"description": "train U-Net only | \u4ec5\u8bad\u7ec3 U-Net\uff0c\u5f00\u542f\u8fd9\u4e2a\u4f1a\u727a\u7272\u6548\u679c\u5927\u5e45\u51cf\u5c11\u663e\u5b58\u4f7f\u7528\u30026G\u663e\u5b58\u53ef\u4ee5\u5f00\u542f"
},
"training_comment": {
"type": "string",
"title": "Training Comment",
"default": "this LoRA model credit from replicate-sd-scripts",
"x-order": 3,
"description": "training_comment | \u8bad\u7ec3\u4ecb\u7ecd\uff0c\u53ef\u4ee5\u5199\u4f5c\u8005\u540d\u6216\u8005\u4f7f\u7528\u89e6\u53d1\u5173\u952e\u8bcd"
},
"max_train_epoches": {
"type": "integer",
"title": "Max Train Epoches",
"default": 10,
"minimum": 1,
"x-order": 8,
"description": "max train epoches | \u6700\u5927\u8bad\u7ec3 epoch"
},
"save_every_n_epochs": {
"type": "integer",
"title": "Save Every N Epochs",
"default": 2,
"minimum": 1,
"x-order": 9,
"description": "save every n epochs | \u6bcf N \u4e2a epoch \u4fdd\u5b58\u4e00\u6b21"
},
"lr_scheduler_num_cycles": {
"type": "integer",
"title": "Lr Scheduler Num Cycles",
"default": 1,
"minimum": 1,
"x-order": 22,
"description": "cosine_with_restarts restart cycles | \u4f59\u5f26\u9000\u706b\u91cd\u542f\u6b21\u6570\uff0c\u4ec5\u5728 lr_scheduler \u4e3a cosine_with_restarts \u65f6\u8d77\u6548"
},
"train_text_encoder_only": {
"type": "boolean",
"title": "Train Text Encoder Only",
"default": false,
"x-order": 13,
"description": "train Text Encoder only | \u4ec5\u8bad\u7ec3 \u6587\u672c\u7f16\u7801\u5668"
},
"pretrained_model_name_or_path": {
"type": "string",
"title": "Pretrained Model Name Or Path",
"default": "CompVis/stable-diffusion-v1-4",
"x-order": 0,
"description": "base model name or path | \u5e95\u6a21\u540d\u79f0\u6216\u8def\u5f84"
},
"persistent_data_loader_workers": {
"type": "boolean",
"title": "Persistent Data Loader Workers",
"default": true,
"x-order": 25,
"description": "makes workers persistent, further reduces/eliminates the lag in between epochs. however it may increase memory usage | \u8dd1\u7684\u66f4\u5feb\uff0c\u5403\u5185\u5b58\u3002\u5927\u6982\u80fd\u63d0\u901f2.5\u500d\uff0c\u5bb9\u6613\u7206\u5185\u5b58\uff0c\u4fdd\u7559\u52a0\u8f7d\u8bad\u7ec3\u96c6\u7684worker\uff0c\u51cf\u5c11\u6bcf\u4e2a epoch \u4e4b\u95f4\u7684\u505c\u987f"
}
}
}
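The `network_alpha` guidance above follows from how sd-scripts applies it: LoRA weight updates are scaled by alpha / dim, so a smaller alpha shrinks the effective update and calls for a higher learning rate. A minimal illustration:

```python
def lora_scale(network_alpha: int, network_dim: int) -> float:
    # sd-scripts multiplies the LoRA delta by alpha / dim, so
    # alpha == dim keeps the nominal scale and smaller alpha shrinks it.
    return network_alpha / network_dim


print(lora_scale(32, 32))  # 1.0 — the defaults above, full nominal scale
print(lora_scale(16, 32))  # 0.5 — half scale, so raise the learning rate
```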
Output schema
The shape of the response you’ll get when you run this model with an API.
{
"type": "string",
"title": "Output",
"format": "uri"
}