pnyompen / sd-lineart-controlnet

This model applies a Lineart ControlNet to the input image and runs Stable Diffusion's image-to-image generation, producing new images that preserve the characteristics of the original. Combined with IP-Adapter, it enables generation that reflects the source image even more faithfully.


Run pnyompen/sd-lineart-controlnet with an API

Use one of our client libraries to get started quickly. Clicking on a library will take you to the Playground tab where you can tweak different inputs, see the results, and copy the corresponding code to use in your own project.
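For example, here is a minimal sketch using the official replicate Python client (pip install replicate), assuming REPLICATE_API_TOKEN is set in your environment; the input image URL is a placeholder, not a value from the model page:

import replicate

# Run the model by its owner/name reference; append ":<version>" to pin
# a specific version. Blocks until the run finishes and returns the output.
output = replicate.run(
    "pnyompen/sd-lineart-controlnet",
    input={
        "prompt": "An astronaut riding a rainbow unicorn",
        "image": "https://example.com/input.png",  # placeholder URL
        "condition_scale": 1.1,
        "strength": 0.8,
    },
)
print(output)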

Input schema

The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used. A complete example payload follows the table below.

| Field | Type | Default value | Constraints | Description |
| --- | --- | --- | --- | --- |
| prompt | string | An astronaut riding a rainbow unicorn | | Input prompt |
| image | string | | | Input image for img2img or inpaint mode |
| condition_scale | number | 1.1 | Max: 2 | Higher values give the ControlNet conditioning more influence over the output |
| strength | number | 0.8 | Max: 1 | Denoising strength for img2img; 1 discards the input image entirely |
| ip_adapter_scale | number | 1 | | Scale for the IP-Adapter |
| negative_prompt | string | | | Input negative prompt |
| num_inference_steps | integer | 30 | Min: 1, Max: 500 | Number of denoising steps |
| num_outputs | integer | 1 | Min: 1, Max: 4 | Number of images to output |
| scheduler | string (enum) | K_EULER | Options: DDIM, DPMSolverMultistep, HeunDiscrete, KarrasDPM, K_EULER_ANCESTRAL, K_EULER, PNDM | Scheduler |
| guidance_scale | number | 7.5 | Min: 1, Max: 50 | Scale for classifier-free guidance |
| seed | integer | | | Random seed. Leave blank to randomize the seed |
| color | array | [0, 0, 0, 0] | | RGBA color |
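As a quick reference, here is a sketch of a complete input payload that respects the constraints above; the image URL, negative prompt, and seed are illustrative placeholders, not values from the model page:

input_payload = {
    "prompt": "An astronaut riding a rainbow unicorn",
    "image": "https://example.com/lineart-source.png",  # placeholder URL
    "condition_scale": 1.1,     # max 2; higher = stronger ControlNet influence
    "strength": 0.8,            # max 1; 1 discards the input image entirely
    "ip_adapter_scale": 1.0,
    "negative_prompt": "blurry, low quality",  # placeholder
    "num_inference_steps": 30,  # 1 to 500
    "num_outputs": 1,           # 1 to 4
    "scheduler": "K_EULER",     # one of the enum options above
    "guidance_scale": 7.5,      # 1 to 50
    "seed": 42,                 # placeholder; omit to randomize
    "color": [0, 0, 0, 0],      # RGBA
}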

Output schema

The shape of the response you’ll get when you run this model with an API.

Schema
{
  "type": "array",
  "items": {
    "type": "string",
    "format": "uri"
  },
  "title": "Output"
}
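Since the output is an array of image URIs, a short download sketch might look like this (assumes the requests package is installed; depending on your client library version, the items may be file-like objects rather than plain strings):

import requests

# Save each returned image URI to a local file.
for i, uri in enumerate(output):
    resp = requests.get(uri, timeout=60)
    resp.raise_for_status()
    with open(f"output_{i}.png", "wb") as f:
        f.write(resp.content)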