pnyompen / dreamshaper-controlnet

Dreamshaper canny controlnet

  • Public
  • 230 runs
  • T4

Input

string

Input prompt

Default: "An astronaut riding a rainbow unicorn"

image
file

Input image for img2img or inpaint mode

boolean

Use the img2img pipeline; the input image is used both as the control image and the base image.

Default: false

boolean

Use BLIP to generate captions for the input images

Default: false

number
(minimum: 0)

Weight for the generated caption

Default: 0.5

number
(minimum: 0, maximum: 2)

The higher this value, the more strongly the ControlNet conditioning influences the output

Default: 1.1

number
(minimum: 0, maximum: 1)

Denoising strength when img2img is active; 1 means the input image is completely replaced.

Default: 0.8

number
(minimum: 0, maximum: 1)

Scale for the IP Adapter

Default: 1

string

Input Negative Prompt

Default: ""

integer
(minimum: 1, maximum: 500)

Number of denoising steps

Default: 30

integer
(minimum: 1, maximum: 4)

Number of images to output

Default: 1

string

scheduler

Default: "K_EULER"

number
(minimum: 1, maximum: 50)

Scale for classifier-free guidance

Default: 7.5

integer

Random seed. Leave blank to randomize the seed

number
(minimum: 0, maximum: 3)

LoRA additive scale. Only applicable on trained models.

Default: 0.95

string

Replicate LoRA weights to use. Leave blank to use the default weights.

boolean

Remove background from the input image

Default: false

boolean

Remove eyes from the canny edge image

Default: true
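
Putting the inputs above together, a minimal call through the Replicate Python client might look like the sketch below. The input field names used here (prompt, image, img2img, condition_scale, num_inference_steps, guidance_scale) are assumptions based on common Replicate conventions; this page only lists types and descriptions, so confirm the exact names against the model's API schema before use.

# Minimal sketch of running this model with the Replicate Python client.
# Input field names are assumed, not confirmed by this page.
import replicate

output = replicate.run(
    "pnyompen/dreamshaper-controlnet",
    input={
        "prompt": "An astronaut riding a rainbow unicorn",  # assumed field name
        "image": open("control.png", "rb"),                 # canny control image; assumed field name
        "img2img": False,                                    # assumed field name
        "condition_scale": 1.1,                              # ControlNet strength; assumed field name
        "num_inference_steps": 30,                           # assumed field name
        "guidance_scale": 7.5,                               # assumed field name
    },
)

# The result is typically a list of generated images (URLs or file-like
# objects, depending on the client version).
for i, item in enumerate(output):
    print(i, item)

Set REPLICATE_API_TOKEN in your environment before running; when no version id is given, replicate.run uses the model's latest version.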

Output


This output was created using a different version of the model, pnyompen/dreamshaper-controlnet:dd8762dc.
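
Because that output came from a different model version, pinning an explicit version id helps when reproducing results. The page only shows a truncated identifier (dd8762dc), so the sketch below lists the model's versions through the Replicate Python client rather than hard-coding one; it assumes the same client setup as the example above.

# Sketch: list this model's versions to find the full version id.
import replicate

model = replicate.models.get("pnyompen/dreamshaper-controlnet")
for version in model.versions.list():
    print(version.id, version.created_at)

# With the full id in hand, run that exact version:
# replicate.run(f"pnyompen/dreamshaper-controlnet:{version_id}", input={...})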

Run time and cost

This model runs on Nvidia T4 GPU hardware. We don't yet have enough runs of this model to provide performance information.

Readme

This model doesn't have a readme.