adirik/t2i-adapter-sdxl-depth-midas

Modify images using depth maps

  • Public
  • 55.3K runs

Run t2i-adapter-sdxl-depth-midas with an API

Use one of our client libraries to get started quickly. Clicking on a library will take you to the Playground tab, where you can tweak different inputs, see the results, and copy the corresponding code to use in your own project.

Input schema

The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used.

| Field | Type | Default value | Description |
|---|---|---|---|
| image | string | — | Input image |
| prompt | string | A photo of a room, 4k photo, highly detailed | Input prompt |
| negative_prompt | string | anime, cartoon, graphic, text, painting, crayon, graphite, abstract, glitch, deformed, mutated, ugly, disfigured | Specify things to not see in the output |
| num_inference_steps | integer | 30 | Number of diffusion steps (max: 100) |
| adapter_conditioning_scale | number | 1 | Conditioning scale (max: 5) |
| adapter_conditioning_factor | number | 1 | Factor to scale image by (max: 1) |
| guidance_scale | number | 7.5 | Guidance scale to match the prompt (max: 10) |
| num_samples | integer | 1 | Number of outputs to generate (min: 1, max: 4) |
| scheduler | string (enum) | K_EULER_ANCESTRAL | Which scheduler to use. Options: DDIM, DPMSolverMultistep, HeunDiscrete, KarrasDPM, K_EULER_ANCESTRAL, K_EULER, PNDM, LMSDiscrete |
| random_seed | integer | (random) | Random seed for reproducibility; leave blank to randomize output |
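As a minimal sketch of working with these fields (the parameter names and defaults come from the schema above; the `build_input` and `validate` helpers are hypothetical, not part of any client library), you might assemble and range-check an input payload like this before sending it:

```python
# Sketch: assemble an input payload for t2i-adapter-sdxl-depth-midas using the
# documented defaults, and check it against the documented min/max ranges.
# These helpers are illustrative only.

SCHEDULERS = {
    "DDIM", "DPMSolverMultistep", "HeunDiscrete", "KarrasDPM",
    "K_EULER_ANCESTRAL", "K_EULER", "PNDM", "LMSDiscrete",
}

def build_input(image_url, **overrides):
    """Return the model input dict, starting from the documented defaults."""
    payload = {
        "image": image_url,
        "prompt": "A photo of a room, 4k photo, highly detailed",
        "negative_prompt": ("anime, cartoon, graphic, text, painting, crayon, "
                            "graphite, abstract, glitch, deformed, mutated, "
                            "ugly, disfigured"),
        "num_inference_steps": 30,
        "adapter_conditioning_scale": 1,
        "adapter_conditioning_factor": 1,
        "guidance_scale": 7.5,
        "num_samples": 1,
        "scheduler": "K_EULER_ANCESTRAL",
    }
    payload.update(overrides)
    return payload

def validate(p):
    """Raise ValueError if a field falls outside its documented range."""
    checks = [
        (0 < p["num_inference_steps"] <= 100, "num_inference_steps must be <= 100"),
        (0 <= p["adapter_conditioning_scale"] <= 5, "adapter_conditioning_scale must be <= 5"),
        (0 <= p["adapter_conditioning_factor"] <= 1, "adapter_conditioning_factor must be <= 1"),
        (0 <= p["guidance_scale"] <= 10, "guidance_scale must be <= 10"),
        (1 <= p["num_samples"] <= 4, "num_samples must be between 1 and 4"),
        (p["scheduler"] in SCHEDULERS, "unknown scheduler"),
    ]
    for ok, msg in checks:
        if not ok:
            raise ValueError(msg)
    return p

inp = validate(build_input("https://example.com/room.png", num_samples=2))
```

The resulting dict is what you would pass as the input of a prediction request, whether through a client library or a direct call to the HTTP API.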

Output schema

The shape of the response you’ll get when you run this model with an API.

Schema
{
  "items": {"format": "uri", "type": "string"},
  "title": "Output",
  "type": "array"
}
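Concretely, this schema says the response is a JSON array of URI strings, one per generated image. A small sketch of parsing such a response (the sample URLs below are placeholders, not real outputs):

```python
import json
from urllib.parse import urlparse

# Hypothetical response body matching the output schema: an array of URI strings.
raw = ('["https://replicate.delivery/pbxt/abc/out-0.png", '
       '"https://replicate.delivery/pbxt/abc/out-1.png"]')

output = json.loads(raw)
assert isinstance(output, list)

# Each item should be a string that parses as a URI,
# per {"format": "uri", "type": "string"}.
for item in output:
    assert isinstance(item, str)
    parsed = urlparse(item)
    assert parsed.scheme and parsed.netloc

print(len(output), "image URL(s)")
```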