edenartlab / sdxl-pipelines

  • Public
  • 560.7K runs
  • A100 (80GB)

Input

string

Mode

Default: "create"

boolean

Yield individual results if True

Default: false

integer
(minimum: 1, maximum: 25)

For mode create, how many steps per update to stream (stream must be set to True)

Default: 1
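
For illustration, a sketch of how these two streaming fields might be set through the Replicate Python client. The page does not show field names, so the keys below (stream, stream_every, text_input) are assumptions inferred from the descriptions, not confirmed names.

```python
# Hedged sketch, not an official example: "stream", "stream_every", and
# "text_input" are assumed field names taken from the descriptions above.
# Iterating over the output consumes results as they become available
# (the loop also works if the client returns a plain list).
import replicate

for partial in replicate.run(
    "edenartlab/sdxl-pipelines:<version-id>",  # paste the version hash from this page
    input={
        "mode": "create",
        "text_input": "a lighthouse at dusk",
        "stream": True,       # yield individual results
        "stream_every": 5,    # one update every 5 diffusion steps
    },
):
    print(partial)
```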

integer
(minimum: 512, maximum: 2048)

Width

Default: 1024

integer
(minimum: 512, maximum: 2048)

Height

Default: 1024

string

Which Stable Diffusion checkpoint to use

Default: "juggernaut_XL2"

string

(optional) URL of Lora finetuning

number
(minimum: 0, maximum: 1.5)

Lora scale (how much of the Lora finetuning to apply)

Default: 0.7

string

Which sampler to use

Default: "euler"

integer
(minimum: 10, maximum: 70)

Diffusion steps

Default: 35

number
(minimum: 0, maximum: 20)

Strength of text conditioning guidance

Default: 7.5

number
(minimum: 1, maximum: 2)

Upscaling resolution

Default: 1

string

Load initial image from file, URL, or base64 string

number
(minimum: 0, maximum: 1)

Strength of initial image

Default: 0

boolean

Adopt aspect ratio from init image

Default: true

string

Controlnet type

Default: "off"

string

Image for controlnet guidance

number
(minimum: 0, maximum: 1.5)

Strength of control image

Default: 0

string

Load ip_adapter image from file, URL, or base64 string

number
(minimum: 0, maximum: 1.25)

Strength of image conditioning from ip_adapter (vs. text conditioning from clip-interrogator or the prompt); used in remix, upscale, blend, and real2real

Default: 0.65

string

Text input

string

Text inputs to interpolate, separated by |

string

Text input weights to interpolate, separated by |
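
As an illustration of the pipe-separated format described above (one text per entry and one matching weight per text), a hypothetical input fragment; the field names are not shown on this page and are made up here.

```python
# Hypothetical field names; only the pipe-separated value format comes from
# the descriptions above: texts and their matching weights, separated by "|".
example_inputs = {
    "text_inputs_to_interpolate": "a misty pine forest | a neon-lit city street",
    "text_inputs_to_interpolate_weights": "0.4 | 0.6",
}
```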

string

Negative text input (mode==all)

Default: "nude, naked, text, watermark, low-quality, signature, padding, margins, white borders, padded border, moiré pattern, downsampling, aliasing, distorted, blurry, blur, jpeg artifacts, compression artifacts, poorly drawn, low-resolution, bad, grainy, error, bad-contrast"

integer
(minimum: 0, maximum: 10000000000)

Random seed

Default: 13

integer
(minimum: 1, maximum: 4)

Batch size

Default: 1

integer
(minimum: 3, maximum: 1000)

Total number of frames for video modes

Default: 40

string

Interpolation texts for video modes

string

Seeds for interpolated texts for video modes

string

Interpolation init images, file paths or urls for video modes

number
(minimum: 0.5, maximum: 5)

Power for interpolation_init_images prompts for video modes

Default: 2.5

number
(minimum: 0, maximum: 1)

Minimum init image strength for interpolation_init_images prompts for video modes

Default: 0.05

number
(minimum: 0, maximum: 1)

Maximum init image strength for interpolation_init_images prompts for video modes

Default: 0.95

string

An audio file to use for real2real_audio

boolean

Loops (mode==interpolate & real2real)

Default: true

boolean

Smooth (mode==interpolate & real2real)

Default: true

string

What fraction of the denoising trajectory to skip at the start and end of each interpolation phase, two floats, separated by a pipe (|)

Default: "0.05|0.6"

integer
(minimum: 3, maximum: 6)

Number of anchor frames to render (including keyframes) before activating latent blending

Default: 3

integer
(minimum: 0, maximum: 3)

Number of times to smooth final frames with FILM (mode==interpolate)

Default: 1

integer
(minimum: 1, maximum: 30)

Frames per second (mode==interpolate & real2real)

Default: 12

boolean

Smooth (mode==interpolate & real2real)

Default: false
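
Putting the pieces together, a minimal sketch of a create-mode call through the Replicate Python client (pip install replicate, with REPLICATE_API_TOKEN set). Every key in the input dict is an assumption inferred from the parameter descriptions above; check the model's API schema for the real field names before relying on this.

```python
# Minimal sketch, not an official example. All input keys are assumed from
# the parameter descriptions on this page, not confirmed field names.
import replicate

output = replicate.run(
    "edenartlab/sdxl-pipelines:<version-id>",  # paste the version hash from this page
    input={
        "mode": "create",                 # default mode
        "text_input": "a watercolor lighthouse at dusk",  # assumed prompt field
        "width": 1024,
        "height": 1024,
        "checkpoint": "juggernaut_XL2",   # default Stable Diffusion checkpoint
        "sampler": "euler",
        "steps": 35,
        "guidance_scale": 7.5,            # strength of text conditioning guidance
        "seed": 13,
    },
)
print(output)  # typically one or more image URLs
```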

Run time and cost

This model costs approximately $0.017 to run on Replicate, or 58 runs per $1, but this varies depending on your inputs. It is also open source and you can run it on your own computer with Docker.

This model runs on Nvidia A100 (80GB) GPU hardware. Predictions typically complete within 13 seconds. The predict time for this model varies significantly based on the inputs.
