
afiaka87 /clip-guided-diffusion:a9650e4b

Input

string (required)

Text prompt to use.

file

An image to blend with diffusion before CLIP guidance begins. Uses half as many timesteps.

string

Number of timesteps. Fewer is faster, but less accurate.

Default: "250"

integer
(minimum: 0, maximum: 2500)

Scale for CLIP spherical distance loss. Values will need tinkering for different settings.

Default: 1000
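The spherical distance loss that this scale multiplies is assumed here to follow the form common in public CLIP-guidance code: the squared geodesic (great-circle) angle between unit-normalized image and text embeddings. A minimal NumPy sketch, under that assumption:

```python
import numpy as np

def spherical_dist_loss(x, y):
    """Squared geodesic distance between unit-normalized embeddings.
    A sketch of the CLIP spherical distance loss this scale is assumed
    to weight; the model's exact implementation may differ."""
    x = x / np.linalg.norm(x, axis=-1, keepdims=True)
    y = y / np.linalg.norm(y, axis=-1, keepdims=True)
    # 2 * arcsin(||x - y|| / 2) is the angle between x and y on the unit sphere
    return (2 * np.arcsin(np.linalg.norm(x - y, axis=-1) / 2)) ** 2

a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
print(spherical_dist_loss(a, a))  # identical embeddings -> 0.0
print(spherical_dist_loss(a, b))  # orthogonal embeddings -> (pi/2)**2
```

The scale then determines how strongly this distance pulls the diffusion sample toward the text prompt, which is why it needs tinkering per setting.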

number
(minimum: 0, maximum: 250)

Scale for a denoising loss that affects the last half of the diffusion process. Values of 0, 100, 150, and 200 are typical.

Default: 50
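In public CLIP-guided diffusion code the "denoising" regularizer is usually a total-variation loss, which penalizes differences between neighboring pixels to smooth out noise. The page does not name the exact loss this model uses, so treat the sketch below as an illustration of that common form, not the model's implementation:

```python
import numpy as np

def tv_loss(image):
    """Total-variation loss on a 2-D image: the mean squared difference
    between horizontally and vertically adjacent pixels. Assumed form of
    the 'denoising loss' described above, not confirmed by the page."""
    dh = image[:, 1:] - image[:, :-1]   # horizontal neighbor differences
    dv = image[1:, :] - image[:-1, :]   # vertical neighbor differences
    return np.mean(dh ** 2) + np.mean(dv ** 2)

flat = np.ones((4, 4))              # constant image: no variation
print(tv_loss(flat))                # 0.0
print(tv_loss(np.random.rand(4, 4)) > 0)  # noisy image: positive loss
```

A larger scale smooths the result more aggressively in the later timesteps, at the cost of fine detail.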

number
(minimum: 0, maximum: 250)

Controls how far out of RGB range values may get.

Default: 50
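Diffusion models typically work on pixel values in [-1, 1], and a "range loss" penalizes samples that drift outside it. The sketch below shows the common form of that loss; the exact variant this model uses is an assumption:

```python
import numpy as np

def range_loss(image):
    """Penalize pixel values outside the working range [-1, 1].
    A sketch of the usual range loss this scale is assumed to weight."""
    overflow = image - np.clip(image, -1.0, 1.0)  # zero for in-range values
    return np.mean(overflow ** 2)

x = np.array([0.5, -1.2, 1.5])  # two values out of range
print(range_loss(x))            # mean of [0, 0.04, 0.25] = 0.0966...
print(range_loss(np.array([0.0, 1.0, -1.0])))  # fully in range -> 0.0
```

A higher scale keeps values closer to valid RGB; a lower one lets the optimizer push further outside the range before being penalized.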

number
(minimum: 0, maximum: 128)

Controls how much saturation is allowed. Useful with DDIM. From @nshepperd.

Default: 0

boolean

Whether to use augmentation during prediction. May help with DDIM and respacing <= 100.

Default: false

boolean

Use the magnitude of the loss. May help (only) with DDIM and respacing <= 100.

Default: false

integer

Seed for reproducibility

Default: 0
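Taken together, a request payload for this model might look like the sketch below. The key names (`prompt`, `timestep_respacing`, and so on) are guesses, since the field names are not shown on this page; check the model's API schema for the real ones.

```python
import json

# Hypothetical input payload mirroring the fields documented above.
# Every key name here is an assumption -- verify against the model's
# actual API schema before sending a request.
inputs = {
    "prompt": "an oil painting of a lighthouse",  # required text prompt
    "timestep_respacing": "250",  # number of timesteps (string, default "250")
    "clip_guidance_scale": 1000,  # CLIP spherical distance loss scale
    "tv_scale": 50,               # denoising loss scale (last half of diffusion)
    "range_scale": 50,            # how far values may leave RGB range
    "sat_scale": 0,               # saturation control, for DDIM
    "use_augs": False,            # augmentation during prediction
    "use_magnitude": False,       # use the magnitude of the loss
    "seed": 0,                    # for reproducibility
}
print(json.dumps(inputs, indent=2))
```

With the official Replicate Python client, a payload like this would be passed as the `input` argument to `replicate.run(...)` along with the model's version identifier.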

Output

file

This example was created by a different version, afiaka87/clip-guided-diffusion:e654097b.