andyx1976/qwenunif


Run andyx1976/qwenunif with an API

Use one of the official client libraries to get started quickly. The Playground tab lets you tweak the different inputs, see the results, and copy the corresponding code to use in your own project.
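For example, here is a minimal sketch using the Replicate Python client (pip install replicate). It assumes the REPLICATE_API_TOKEN environment variable is set; depending on your client version you may need to pin an explicit version, such as andyx1976/qwenunif:&lt;version&gt;.

```python
# Minimal sketch: run the model with the Replicate Python client.
# Assumes REPLICATE_API_TOKEN is set in the environment.
import replicate

output = replicate.run(
    "andyx1976/qwenunif",
    input={"prompt": "A lighthouse on a rocky coast at dusk, watercolor style"},
)
print(output)  # a list of output image URLs (see the output schema below)
```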

Input schema

The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used.

| Field | Type | Default value | Description |
|---|---|---|---|
| prompt | string | | Prompt for the generated image. |
| enhance_prompt | boolean | False | Append a high-detail suffix to the prompt. |
| lora_weights | string | | Load LoRA weights. Supports local .safetensors paths or ZIPs produced by cog train. |
| replicate_weights | string | | LoRA ZIP generated by cog train (alternative to lora_weights). |
| lora_scale | number | 1 | How strongly the loaded LoRA is applied. |
| image | string | | Optional guide image for img2img. |
| strength | number | 0.9 | Strength for the img2img pipeline (max: 1). |
| negative_prompt | string | | Negative prompt for the generated image. |
| aspect_ratio | string | 16:9 | Aspect ratio for the generated image. |
| image_size | string | optimize_for_quality | Image size preset (quality = larger, speed = faster). |
| go_fast | boolean | True | Run faster predictions with aggressive caching. |
| num_inference_steps | integer | 30 | Number of denoising steps (min: 1, max: 50). |
| guidance | number | 3 | Guidance for the generated image (0–10). |
| seed | integer | | Random seed. Leave blank for a random seed. |
| output_format | string | webp | Format of the output images. |
| output_quality | integer | 80 | Quality when saving lossy images (0–100). |
| disable_safety_checker | boolean | False | Disable the safety checker (not used). |
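Putting several of these fields together, an input payload might look like the sketch below. The values are illustrative only; any field you omit falls back to its default.

```python
# Illustrative input covering common fields from the schema above.
# Omitted fields (e.g. lora_weights, image) fall back to their defaults.
import replicate

example_input = {
    "prompt": "A red fox in a snowy forest, cinematic lighting",
    "negative_prompt": "blurry, low quality",
    "aspect_ratio": "16:9",
    "num_inference_steps": 30,
    "guidance": 3,
    "go_fast": True,
    "output_format": "webp",
    "output_quality": 80,
    "seed": 42,
}

output = replicate.run("andyx1976/qwenunif", input=example_input)
```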

Output schema

The shape of the response you’ll get when you run this model with an API.

Schema
{
  "type": "array",
  "items": {
    "type": "string",
    "format": "uri"
  },
  "title": "Output"
}
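In other words, a successful prediction returns a JSON array of URL strings pointing to the generated images. A rough sketch of consuming that output with the Python client follows; newer client versions may return file-like objects instead of plain strings, hence the str() call.

```python
# Rough sketch: download each returned image URL to a local file.
# str(item) covers clients that return file objects rather than plain URL strings.
import urllib.request

import replicate

output = replicate.run(
    "andyx1976/qwenunif",
    input={"prompt": "A hot air balloon drifting over mountains at sunrise"},
)

for i, item in enumerate(output):
    filename = f"output_{i}.webp"  # default output_format is webp
    urllib.request.urlretrieve(str(item), filename)
    print(f"saved {filename}")
```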