usamaehsan/qwen-image-4bit:5190bd4f

Input schema

The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used.

| Field | Type | Default value | Description |
|---|---|---|---|
| image | array | | Input image(s). Provide a single image for editing, or multiple images (2-4) for composition tasks. |
| prompt | string | Remove the background | Text prompt describing the edit or composition to perform. |
| negative_prompt | string | | Negative prompt (things to avoid in the output). |
| width | integer | 0 | Width of the output image, up to 2048. Set to 0 to use the input image width. |
| height | integer | 0 | Height of the output image, up to 2048. Set to 0 to use the input image height. |
| num_inference_steps | integer | 20 | Number of denoising steps (1-50). More steps give higher quality but slower inference. Use 40+ for compositions. |
| guidance_scale | number | 1 | Guidance scale for multi-image composition (max: 20). |
| true_cfg_scale | number | 2 | True guidance scale for the edit (1-20). Higher values mean stronger adherence to the prompt. Use 1.0 to disable CFG. Recommended: 2.0-4.0 for edits. |
| num_images_per_prompt | integer | 1 | Number of images to generate per prompt (1-4). |
| lora_mode | string | none | Lightning LoRA mode for faster inference: 'none' = standard quality (20-40 steps), 'lightning-4steps' = ultra-fast (4 steps), 'lightning-8steps' = fast (8 steps). |
| seed | integer | | Random seed. Leave blank to randomize. |
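
As a quick illustration of these fields, here is a minimal sketch of a single-image edit using the Replicate Python client (`pip install replicate`). The image URL, negative prompt, and seed below are placeholder values, and the model reference reuses the version string shown at the top of this page.

```python
# Sketch: single-image edit with this model's input fields.
# Assumes REPLICATE_API_TOKEN is set in the environment and that the
# version hash below matches the one shown on this page.
import replicate

output = replicate.run(
    "usamaehsan/qwen-image-4bit:5190bd4f",
    input={
        "image": ["https://example.com/photo.png"],  # one image = edit; 2-4 images = composition
        "prompt": "Remove the background",
        "negative_prompt": "blurry, low quality",    # placeholder negative prompt
        "width": 0,                                  # 0 = keep input width
        "height": 0,                                 # 0 = keep input height
        "num_inference_steps": 20,                   # use 40+ for compositions
        "true_cfg_scale": 2.0,                       # 2.0-4.0 recommended for edits
        "num_images_per_prompt": 1,
        "lora_mode": "none",                         # or "lightning-4steps" / "lightning-8steps"
        "seed": 1234,                                # leave out to randomize
    },
)
print(output)  # URI of the generated image
```

For a multi-image composition, pass 2-4 URLs in `image`, raise `num_inference_steps` to 40+, and adjust `guidance_scale` as needed.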

Output schema

The shape of the response you’ll get when you run this model with an API.

Schema
{
  "format": "uri",
  "title": "Output",
  "type": "string"
}
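
Since the output is a single URI, handling the response amounts to downloading the file. A minimal sketch, continuing from the `output` value returned by `replicate.run(...)` above (the local filename is arbitrary):

```python
# Sketch: save the returned image URI to a local file.
# str(output) covers both plain string and file-like return values
# from the Replicate client.
import urllib.request

urllib.request.urlretrieve(str(output), "edited.png")
print("Saved edited.png")
```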