
lightweight-ai/q_l_t:9aff86c5

Input schema

The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used.

| Field | Type | Default value | Constraints | Description |
| --- | --- | --- | --- | --- |
| seed | integer | 42 | | Random seed |
| steps | integer | 1000 | Min: 1 | Total training steps (optimizer updates) |
| lora_r | integer | 16 | Min: 1 | LoRA rank r |
| data_zip | string | | | ZIP of training data. Each image must have a caption file with the same base name and a .txt extension (e.g., cat.jpg + cat.txt). Files can be in subfolders. |
| base_model | string | Qwen/Qwen-Image | | Base VLM to fine-tune (HuggingFace ID). Use a Qwen-VL/Qwen2-VL instruct checkpoint, e.g. 'Qwen/Qwen2-VL-2B-Instruct'. |
| batch_size | integer | 1 | Min: 1 | Per-device batch size |
| lora_alpha | integer | 32 | Min: 1 | LoRA alpha |
| resolution | integer | 672 | Min: 64 | Short-side image resolution for training (if your job type uses it) |
| save_every | integer | 500 | Min: 1 | Save checkpoint every N steps |
| resume_from | string | | | Optional path or HF repo to resume from a previous LoRA checkpoint |
| lora_dropout | number | 0.05 | Max: 1 | LoRA dropout |
| learning_rate | number | 0.00005 | | Learning rate |
| gradient_accumulation_steps | integer | 4 | Min: 1 | Gradient accumulation to emulate larger batch sizes |
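As an illustration of how these fields fit together, here is a minimal sketch that packages training data in the expected layout (image plus same-name .txt caption) and assembles an input payload using the defaults above. The field names, defaults, and Min/Max constraints come from the table; the placeholder file contents and the client-side checks are illustrative assumptions, not part of any published client for this model.

```python
import io
import zipfile

# Package training data: each image is paired with a caption file that
# shares its base name (e.g. cat.jpg + cat.txt); subfolders are allowed.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("pets/cat.jpg", b"\xff\xd8\xff")           # placeholder image bytes
    zf.writestr("pets/cat.txt", "a photo of a tabby cat")  # matching caption

# Input payload built from the documented defaults; fields you omit
# fall back to these same values on the server side.
payload = {
    "seed": 42,
    "steps": 1000,
    "lora_r": 16,
    "base_model": "Qwen/Qwen-Image",
    "batch_size": 1,
    "lora_alpha": 32,
    "resolution": 672,
    "save_every": 500,
    "lora_dropout": 0.05,
    "learning_rate": 0.00005,
    "gradient_accumulation_steps": 4,
}

# Client-side sanity checks mirroring the table's Min/Max constraints.
MIN_ONE = ("steps", "lora_r", "batch_size", "lora_alpha",
           "save_every", "gradient_accumulation_steps")
assert all(payload[k] >= 1 for k in MIN_ONE)
assert payload["resolution"] >= 64
assert payload["lora_dropout"] <= 1

# With gradient accumulation, each optimizer update sees an effective
# batch of batch_size * gradient_accumulation_steps samples per device.
effective_batch = payload["batch_size"] * payload["gradient_accumulation_steps"]
print(effective_batch)  # 1 * 4 = 4
```

The defaults give an effective batch size of 4 per device, which is how a batch_size of 1 can still train stably on limited GPU memory.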

Output schema

The shape of the response you’ll get when you run this model with an API.

Schema
{"format": "uri", "title": "Output", "type": "string"}
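Per the schema, a successful run returns a single string formatted as a URI. A minimal sketch of consuming that output, assuming a hypothetical result value (the actual URL and filename will differ):

```python
from urllib.parse import urlparse

# Hypothetical output value; the real response is a URI string per the schema.
output = "https://example.com/trained_lora/adapter.safetensors"

parsed = urlparse(output)
assert parsed.scheme in ("http", "https")  # output has JSON Schema format 'uri'
filename = parsed.path.rsplit("/", 1)[-1]
print(filename)  # adapter.safetensors
```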