xlabs-ai / flux-dev-controlnet

XLabs v3 canny, depth, and soft edge ControlNets for Flux.1 Dev

  • Public
  • 162.3K runs
  • A100 (80GB)
  • GitHub
  • Weights
  • License

Input

string

Default: ""

string

Things you do not want to see in your image

Default: ""

number
(minimum: 0, maximum: 5)

Guidance scale

Default: 3.5

integer
(minimum: 1, maximum: 50)

Number of steps

Default: 28

string

Type of control net

Default: "depth"

number
(minimum: 0, maximum: 3)

Strength of control net. Different controls work better with different strengths: canny works best with 0.5, soft edge works best with 0.4, and depth works best between 0.5 and 0.75. If images are low quality, try reducing both the strength and the guidance scale.

Default: 0.5
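The per-type recommendations above can be restated as a small lookup. This is an illustrative sketch, not part of the model's API; the depth value is simply the midpoint of the suggested 0.5–0.75 range:

```python
# Suggested starting control strengths, restating the guidance above.
RECOMMENDED_STRENGTH = {
    "canny": 0.5,
    "soft_edge": 0.4,
    "depth": 0.625,  # midpoint of the suggested 0.5-0.75 range
}

def suggested_strength(control_type: str) -> float:
    """Return a sensible starting strength, falling back to the 0.5 default."""
    return RECOMMENDED_STRENGTH.get(control_type, 0.5)
```

If outputs look degraded, step the returned value down before touching other parameters.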

control_image
*file

Image to use with control net

number
(minimum: 0, maximum: 1)

Strength of image-to-image control. 0 means none of the control image is used; 1 means the control image is used as is. Try values between 0 and 0.25 for best results.

Default: 0

string

Preprocessor to use with depth control net

Default: "DepthAnything"

string

Preprocessor to use with soft edge control net

Default: "HED"

string

Optional LoRA model to use. Give a URL to a HuggingFace .safetensors file, a Replicate .tar file, or a CivitAI download link.

Default: ""

number
(minimum: -1, maximum: 3)

Strength of LoRA model

Default: 1

boolean

Return the preprocessed image used to control the generation process. Useful for debugging.

Default: false

string

Format of the output images

Default: "webp"

integer
(minimum: 0, maximum: 100)

Quality of the output images, from 0 (lowest quality) to 100 (best quality).

Default: 80

integer

Set a seed for reproducibility. Random by default.
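Putting the schema together, a call with the Python client would look roughly like the sketch below. Note the page does not show the API field names, so every key in the input dict (`prompt`, `control_type`, `control_strength`, and so on) is a guess from the descriptions above; the documented numeric ranges, however, come straight from this page:

```python
# Hypothetical input dict -- field names are guessed from the descriptions
# above, since this page does not show the actual API parameter names.
inputs = {
    "control_image": "https://example.com/pose.png",  # image to guide generation
    "prompt": "a portrait photo of an astronaut",
    "control_type": "depth",     # default control net type per the schema
    "control_strength": 0.5,     # range 0-3; see per-type recommendations
    "guidance_scale": 3.5,       # range 0-5
    "steps": 28,                 # range 1-50
    "output_format": "webp",
    "output_quality": 80,        # range 0-100
}

# Documented ranges from the schema above, used for a local sanity check.
RANGES = {
    "control_strength": (0, 3),
    "guidance_scale": (0, 5),
    "steps": (1, 50),
    "output_quality": (0, 100),
}

def validate(inputs: dict) -> None:
    """Raise ValueError if any numeric input falls outside its documented range."""
    for key, (lo, hi) in RANGES.items():
        if key in inputs and not (lo <= inputs[key] <= hi):
            raise ValueError(f"{key}={inputs[key]} outside [{lo}, {hi}]")

validate(inputs)  # passes for the values above

# With a REPLICATE_API_TOKEN set, the actual call would look roughly like:
# import replicate
# output = replicate.run("xlabs-ai/flux-dev-controlnet", input=inputs)
```

Validating locally before submitting avoids paying for a prediction that the API would reject.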

Output


This example was created by a different version, xlabs-ai/flux-dev-controlnet:56ac7b66.

Run time and cost

This model costs approximately $0.11 to run on Replicate, or 9 runs per $1, but this varies depending on your inputs. It is also open source and you can run it on your own computer with Docker.

This model runs on Nvidia A100 (80GB) GPU hardware. Predictions typically complete within 83 seconds. The predict time for this model varies significantly based on the inputs.
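The cost figure is consistent with the typical runtime if you assume a per-second A100 (80GB) rate of about $0.0014, which is an assumption, since the rate is not stated on this page:

```python
# Assumed per-second price for A100 (80GB) hardware; not stated on this page.
RATE_PER_SECOND = 0.0014  # USD, assumed
typical_seconds = 83      # typical prediction time from above

cost_per_run = typical_seconds * RATE_PER_SECOND  # ~0.116 USD, i.e. roughly $0.11
runs_per_dollar = round(1 / cost_per_run)         # ~9 runs per $1
```

Since billing is per second, inputs that raise step count or resolution raise the cost proportionally.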

Readme

This model doesn't have a readme.