cjwbw / kandinsky-2-2-controlnet-depth

Kandinsky Image Generation with ControlNet Conditioning

  • Public
  • 3.7K runs
  • A100 (80GB)
  • GitHub
  • License

Input

image
file

Input image

string

Input prompt

Default: "A robot, 4k photo"

string

Specify things not to see in the output

Default: "lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, out of frame, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, username, watermark, signature"

string

Choose a task

Default: "img2img"

integer

Width of output image. Lower this setting if you hit memory limits.

Default: 768

integer

Height of output image. Lower this setting if you hit memory limits.

Default: 768

integer
(minimum: 1, maximum: 500)

Number of denoising steps

Default: 75

integer
(minimum: 1, maximum: 4)

Number of images to output.

Default: 1

integer

Random seed. Leave blank to randomize the seed.
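
To tie the schema above together, here is a minimal sketch of calling this model with the Replicate Python client. The parameter names (prompt, negative_prompt, task, width, height, num_inference_steps, num_outputs, seed) are assumed from Replicate's usual conventions, and "<version>" stands in for the model's current version hash; check the model's API page for the exact schema.

import replicate  # pip install replicate; requires REPLICATE_API_TOKEN to be set

with open("depth-source.png", "rb") as image_file:
    output = replicate.run(
        # "<version>" is a placeholder for the model's current version hash
        "cjwbw/kandinsky-2-2-controlnet-depth:<version>",
        input={
            "image": image_file,  # conditioning source image
            "prompt": "A robot, 4k photo",
            "task": "img2img",
            "width": 768,
            "height": 768,
            "num_inference_steps": 75,
            "num_outputs": 1,
        },
    )

# The output is typically a list with one URL per generated image
for url in output:
    print(url)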

Output

output

Run time and cost

This model costs approximately $0.011 to run on Replicate, or 90 runs per $1, but this varies depending on your inputs. It is also open source and you can run it on your own computer with Docker.

This model runs on Nvidia A100 (80GB) GPU hardware. Predictions typically complete within 9 seconds. The predict time for this model varies significantly based on the inputs.
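Since the model is open source, one way to run it on your own machine is to pull the published Docker image and send requests to the container's HTTP prediction endpoint. The sketch below assumes the container was started from the image published for cjwbw/kandinsky-2-2-controlnet-depth, is listening on Cog's default port 5000, and that file inputs are passed as data URIs; adjust for your setup.

import base64
import requests

# Encode the conditioning image as a data URI, the usual way to send file
# inputs to a Cog HTTP server
with open("depth-source.png", "rb") as f:
    image_data_uri = "data:image/png;base64," + base64.b64encode(f.read()).decode()

response = requests.post(
    "http://localhost:5000/predictions",
    json={
        "input": {
            "image": image_data_uri,
            "prompt": "A robot, 4k photo",
            "task": "img2img",
        }
    },
)
response.raise_for_status()

# The response JSON includes the prediction output (image URLs or data URIs)
print(response.json().get("output"))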