Input schema
The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used.
| Field | Type | Default value | Description |
|---|---|---|---|
| prompt | string | a photo of an astronaut riding a horse on mars | Input prompt. For multi-modal image generation with one or more input images, the prompt should contain placeholders in the format <img><\|image_*\|></img> (the placeholder for the first image is <\|image_1\|>, for the second <\|image_2\|>, and so on). Refer to the examples for details. |
| img1 | string | | Input image 1. Optional. |
| img2 | string | | Input image 2. Optional. |
| img3 | string | | Input image 3. Optional. |
| width | integer | 1024 (min 128, max 2048) | Width of the output image |
| height | integer | 1024 (min 128, max 2048) | Height of the output image |
| inference_steps | integer | 50 (min 1, max 100) | Number of denoising steps |
| guidance_scale | number | 2.5 (min 1, max 5) | Classifier-free guidance scale for the text prompt |
| img_guidance_scale | number | 1.6 (min 1, max 2) | Classifier-free guidance scale for input images |
| seed | integer | | Random seed. Leave blank to randomize the seed. |
| max_input_image_size | integer | 1024 (min 128, max 2048) | Maximum input image size |
| separate_cfg_infer | boolean | True | Run a separate inference pass for each guidance condition; this reduces memory cost. |
| offload_model | boolean | False | Offload the model to CPU, which significantly reduces memory cost but slows generation. You can disable separate_cfg_infer and set offload_model=True instead; enabling both reduces memory further but is slowest. |
| use_input_image_size_as_output | boolean | False | Automatically match the output image size to the input image size. For editing and ControlNet tasks, this ensures the output has the same size as the input, which improves results. |
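To illustrate how the fields fit together, here is a minimal sketch of a helper that assembles an input payload matching the schema above and builds the `<img><|image_*|></img>` placeholder for a prompt. This is not an official client; the helper names (`build_input`, `image_placeholder`) and the example image URL are hypothetical, and the range checks simply mirror the min/max values in the table.

```python
def image_placeholder(n):
    """Return the prompt placeholder for the n-th input image (1-based)."""
    return f"<img><|image_{n}|></img>"

def build_input(prompt, images=(), width=1024, height=1024,
                inference_steps=50, guidance_scale=2.5,
                img_guidance_scale=1.6, seed=None):
    """Assemble an input payload, validating ranges against the schema."""
    if not (128 <= width <= 2048 and 128 <= height <= 2048):
        raise ValueError("width and height must be within 128-2048")
    if not 1 <= inference_steps <= 100:
        raise ValueError("inference_steps must be within 1-100")
    if not 1 <= guidance_scale <= 5:
        raise ValueError("guidance_scale must be within 1-5")
    if not 1 <= img_guidance_scale <= 2:
        raise ValueError("img_guidance_scale must be within 1-2")
    payload = {
        "prompt": prompt,
        "width": width,
        "height": height,
        "inference_steps": inference_steps,
        "guidance_scale": guidance_scale,
        "img_guidance_scale": img_guidance_scale,
    }
    # img1..img3 correspond to placeholders <|image_1|>..<|image_3|>
    for i, url in enumerate(images[:3], start=1):
        payload[f"img{i}"] = url
    if seed is not None:
        payload["seed"] = seed
    return payload

# Example: an editing prompt that references the first input image.
# The URL is a placeholder, not a real asset.
inp = build_input(
    f"make the person in {image_placeholder(1)} wear a red hat",
    images=["https://example.com/person.png"],
)
```

The resulting dictionary can then be passed as the `input` of an API call.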
Output schema
The shape of the response you’ll get when you run this model with an API.
Schema
{
  "format": "uri",
  "title": "Output",
  "type": "string"
}
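Since the output is a single URI string, a caller can sanity-check the response before downloading it. A minimal sketch (the `is_valid_output` name is hypothetical) using only the standard library:

```python
from urllib.parse import urlparse

def is_valid_output(value):
    """Check that a model response is a URI string, per the output schema."""
    if not isinstance(value, str):
        return False
    parts = urlparse(value)
    # Require both a scheme (e.g. https) and a network location.
    return bool(parts.scheme and parts.netloc)

is_valid_output("https://example.com/out.png")  # True
is_valid_output("not a uri")                    # False
```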