mikelyndon / controlnet-multi

A multi-input ControlNet model. Pass in control images and set the weights. (Updated 2 years ago)

  • Public
  • 253 runs
  • T4
  • GitHub

Input

  • string: Input prompt. Default: "a magpie wearing a tophat"
  • string: Things to leave out of the output (negative prompt). Default: "monochrome, lowres, bad anatomy, worst quality, low quality"
  • integer: Random seed. Leave blank to randomize the seed.
  • canny (file, required): Canny edge input image
  • pose (file, required): Pose input image
  • depth (file, required): Depth input image
  • mlsd (file, required): MLSD input image
  • seg (file, required): Segmentation input image
  • softedge (file, required): Soft Edge (HED) input image
  • number (0 to 1): How strongly to weight the Canny edge conditioning. Default: 1
  • number (0 to 1): How strongly to weight the OpenPose conditioning. Default: 0
  • number (0 to 1): How strongly to weight the depth conditioning. Default: 0
  • number (0 to 1): How strongly to weight the MLSD conditioning. Default: 0
  • number (0 to 1): How strongly to weight the segmentation conditioning. Default: 0
  • number (0 to 1): How strongly to weight the soft edge (HED) conditioning. Default: 0
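
The snippet below is a minimal sketch of calling this model with the Replicate Python client. Only the control-image input names (canny, pose, depth, mlsd, seg, softedge) appear on this page; the names used here for the prompt, negative prompt, and weight fields are assumptions, so check the model's API schema for the exact names and pin a version string if your client requires one.

    # Minimal sketch using the Replicate Python client (pip install replicate).
    # Field names marked "assumed" are not shown on this page and may differ.
    import replicate

    output = replicate.run(
        "mikelyndon/controlnet-multi",  # you may need "owner/model:<version-hash>"
        input={
            "prompt": "a magpie wearing a tophat",  # assumed name
            "negative_prompt": "monochrome, lowres, bad anatomy, worst quality, low quality",  # assumed name
            "canny": open("canny.png", "rb"),        # Canny edge map
            "pose": open("pose.png", "rb"),          # OpenPose skeleton
            "depth": open("depth.png", "rb"),        # depth map
            "mlsd": open("mlsd.png", "rb"),          # straight-line (MLSD) map
            "seg": open("seg.png", "rb"),            # segmentation map
            "softedge": open("softedge.png", "rb"),  # soft edge (HED) map
            "canny_weight": 1.0,  # assumed names for the 0-1 weight fields
            "pose_weight": 0.0,
        },
    )
    print(output)  # typically a URL (or list of URLs) to the generated image(s)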

Output

Example output image.

Run time and cost

This model costs approximately $0.19 to run on Replicate, or 5 runs per $1, but this varies depending on your inputs. It is also open source and you can run it on your own computer with Docker.

This model runs on Nvidia T4 GPU hardware. Predictions typically complete within 15 minutes. The predict time for this model varies significantly based on the inputs.
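
Because the model is open source, it can also be run locally. The sketch below assumes the image is available at the standard Replicate registry path (the model page's Docker instructions give the exact image reference) and that the container exposes the default Cog HTTP API on port 5000; input field names carry the same assumptions as in the earlier example.

    # Sketch of calling a locally running copy of the model, assuming it was
    # started with something like:
    #   docker run -d --gpus=all -p 5000:5000 r8.im/mikelyndon/controlnet-multi
    # (take the exact image reference from the model's Docker instructions).
    # File inputs are passed as URLs or data URIs, depending on the Cog version.
    import requests

    resp = requests.post(
        "http://localhost:5000/predictions",
        json={
            "input": {
                "prompt": "a magpie wearing a tophat",     # assumed name
                "canny": "https://example.com/canny.png",  # hypothetical URL
            }
        },
        timeout=15 * 60,  # predictions can take up to ~15 minutes on a T4
    )
    resp.raise_for_status()
    print(resp.json().get("output"))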