cjwbw/blipdiffusion-controlnet:0072c227
Input schema
The fields you can use to run this model with an API. If you don’t give a value for a field, its default value will be used.
Field | Type | Default value | Description
---|---|---|---
prompt | string | on a marble table | The prompt to guide the image generation.
negative_prompt | string | over-exposure, under-exposure, saturated, duplicate, out of frame, lowres, cropped, worst quality, low quality, jpeg artifacts, morbid, mutilated, ugly, bad anatomy, bad proportions, deformed, blurry | The prompt or prompts not to guide the image generation.
style_image | string | | The reference style image to condition the generation on.
condtioning_image | string | | The conditioning canny edge image to condition the generation on.
controlnet_type | string (enum) | canny | Which ControlNet to use. Options: canny, hed
style_subject_category | string | flower | The source subject category (the subject that defines the style).
target_subject_category | string | teapot | The target subject category (the subject to generate).
num_inference_steps | integer | 25 | The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. Min: 1, Max: 500
guidance_scale | number | 7.5 | Scale for classifier-free guidance. Min: 1, Max: 20
seed | integer | | Random seed. Leave blank to randomize the seed.
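
As a reference, here is a minimal sketch of running this version with the Replicate Python client. It assumes the `replicate` package is installed and `REPLICATE_API_TOKEN` is set in your environment; the image URLs are placeholders, and the version suffix shown is the shortened form from the page header, so substitute the full version hash when calling.

```python
# Minimal sketch using the Replicate Python client (pip install replicate).
# Assumes REPLICATE_API_TOKEN is exported; image URLs below are placeholders.
import replicate

output = replicate.run(
    # Shortened version ID as shown on this page; use the full hash in practice.
    "cjwbw/blipdiffusion-controlnet:0072c227",
    input={
        "prompt": "on a marble table",
        "negative_prompt": "over-exposure, under-exposure, saturated, duplicate, "
                           "out of frame, lowres, cropped, worst quality, low quality",
        "style_image": "https://example.com/style.jpg",        # placeholder URL
        "condtioning_image": "https://example.com/edges.png",  # placeholder URL; field name as spelled in the schema
        "controlnet_type": "canny",
        "style_subject_category": "flower",
        "target_subject_category": "teapot",
        "num_inference_steps": 25,
        "guidance_scale": 7.5,
        # "seed" omitted so the model randomizes it
    },
)
print(output)  # URI pointing at the generated image
```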
Output schema
The shape of the response you’ll get when you run this model with an API.
Schema
{"format": "uri", "title": "Output", "type": "string"}
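
Since the output is a single URI string, saving the result is straightforward. A short sketch, assuming `output` holds the URI returned by the call above:

```python
# Fetch the generated image from the returned URI and save it locally.
import urllib.request

urllib.request.urlretrieve(output, "result.png")
```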