s-henmind/xz-depth-pose

Public · 11 runs

Run s-henmind/xz-depth-pose with an API

Use one of our client libraries to get started quickly. Clicking on a library will take you to the Playground tab where you can tweak different inputs, see the results, and copy the corresponding code to use in your own project.

Input schema

The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used.

| Field | Type | Default value | Description |
| --- | --- | --- | --- |
| prompt | string | | Prompt for the generated image |
| conditioning_scale | number | 0.5 | ControlNet strength (max 1). Depth works best at 0.2 and canny at 0.4; the recommended range is 0.3–0.8 |
| image | string | | The image to restyle |
| strength | number | 0.8 | Img2img strength (max 1) |
| guidance_scale | number | 3.5 | Guidance scale (max 30) |
| enable_hyper_flux_8_step | boolean | false | Whether to use Hyper-FLUX.1-dev-8steps. If false, make sure to increase the number of inference steps |
| num_inference_steps | integer | 8 | Number of inference steps (min 1, max 38) |
| seed | integer | | Random seed. Set for reproducible generation |
| output_format | string | jpg | Format of the output images |
| output_quality | integer | 100 | Quality when saving the output images, from 0 (lowest) to 100 (best). Not relevant for .png outputs |
| lora_weights | string | | Hugging Face path or URL to the LoRA weights, e.g. alvdansen/frosting_lane_flux |
| lora_scale | number | 0.8 | Scale for the LoRA weights (max 2) |
| id_image | string | | ID image |
| id_image_1 | string | | ID image |
| id_image_2 | string | | ID image |
| id_weight | number | | ID weight (max 1) |
| num_outputs | integer | 1 | Number of images to output (min 1, max 4) |
| dilate_pixels | integer | 10 | Pixels to dilate (max 30) |
| repaint_steps | integer | 4 | Number of repaint steps (max 10) |
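
For reference, here is a minimal sketch of calling this model with Replicate's Python client, passing a few of the fields above. The prompt text, image URL, and parameter values are illustrative placeholders, and depending on your client version you may need to pin an explicit model version string rather than just the model name.

```python
# pip install replicate
# Requires REPLICATE_API_TOKEN to be set in the environment.
import replicate

# Illustrative inputs; any field omitted here falls back to its default value.
output = replicate.run(
    "s-henmind/xz-depth-pose",
    input={
        "prompt": "a watercolor portrait, soft lighting",  # placeholder prompt
        "image": "https://example.com/source.jpg",         # placeholder image URL
        "conditioning_scale": 0.5,
        "strength": 0.8,
        "guidance_scale": 3.5,
        "enable_hyper_flux_8_step": False,
        "num_inference_steps": 8,
        "num_outputs": 1,
        "output_format": "jpg",
    },
)
print(output)  # per the output schema below, a list of image URIs
```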

Output schema

The shape of the response you’ll get when you run this model with an API.

Schema
```json
{
  "type": "array",
  "items": {
    "type": "string",
    "format": "uri"
  },
  "title": "Output"
}
```
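
Since the output schema is an array of URI strings, a sketch of saving the results might look like the following. The file names are arbitrary, and newer Replicate client versions may return file-like objects instead of plain URL strings, in which case you would adapt this accordingly.

```python
import urllib.request

# `output` is the list of URI strings returned by replicate.run(...) above.
for i, uri in enumerate(output):
    urllib.request.urlretrieve(uri, f"output_{i}.jpg")
```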