neurowelt / keros-diffusion

Controlling SD XL diffusion inference


Run time and cost

This model runs on Nvidia A40 GPU hardware. We don't yet have enough runs of this model to provide performance information.

Readme

Our aim at Keros AI was to give users more control over the results of Stable Diffusion XL inference. We achieved this by manipulating the noise at the scheduler step level.
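To illustrate what "manipulating the noise at the scheduler step level" can look like, here is a minimal, hypothetical sketch: a toy denoising loop where the predicted noise is rescaled at each step before the latent update. The function names, the `0.1 * richness` mapping, and the update rule are illustrative assumptions, not the actual Keros implementation, which is not public.

```python
import numpy as np

def denoise_step(latent, noise_pred, step_scale):
    # Toy latent update: subtract the (scaled) predicted noise.
    # Rescaling noise_pred per scheduler step is the kind of hook the
    # README describes; the real transfer functions are unknown here.
    return latent - step_scale * noise_pred

rng = np.random.default_rng(0)
latent = rng.standard_normal((4, 8, 8))  # stands in for an SDXL latent

richness = 2.0  # hypothetical: higher richness -> stronger per-step effect
for t in range(10):
    noise_pred = rng.standard_normal(latent.shape)  # stands in for the UNet output
    latent = denoise_step(latent, noise_pred, step_scale=0.1 * richness)
```

In a real pipeline this kind of hook would live inside the scheduler's step function (or a step callback), where both the timestep and the model's noise prediction are available.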

To increase control over the results of diffusion inference, we introduce the following parameters:

  • Richness (1.0-3.0): Low values produce little detail (sometimes even missing objects) and a misty look. High values make images richer and more complex, but can also introduce artifacts.

  • Contrast (0.9-1.1): A delicate parameter that can correct the overexposure that guidance or high richness can create. Use 1.0 for no change.

  • Texture (0.7-1.3): Low values give high local contrast and sharper transitions between objects, better suited for illustrations, glitchcore, etc. High values give smooth textures, better for photos.

  • Background (0.25-0.3): For historical reasons, 0.3 is the starting point, equivalent to standard SDXL. Depending on the prompt, 0.25 can remove background objects that shouldn't be there, allowing single-color images or high-contrast pure black and white.

  • Focus (0.0-0.5): At 0.5, images are crisp, sharp, and contrasty; at 0.25 they are misty and delicate. The scaling is non-linear: the difference between 0.5 and 0.49 is much larger than between 0.1 and 0.01. It's best to leave this at its default.

  • Variance (-0.1-0.1): The hardest to control, as it changes how the other parameters operate. A low value of -0.1 works well for txt2img, helping produce more interesting results; pair it with a high Richness (1.0 up to ~2.5). A value of 0.1 works well for img2img, preserving the structure of the image while allowing large changes in texture and style; pair it with a Richness of 0.4 up to 1.0.
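The parameter ranges above can be encoded as a simple input payload with range validation before submitting a run. The lowercase parameter keys here are assumptions mirroring the README bullets; check the model's actual input schema on Replicate before use.

```python
# Hypothetical input ranges transcribed from the README bullets above.
RANGES = {
    "richness":   (1.0, 3.0),
    "contrast":   (0.9, 1.1),
    "texture":    (0.7, 1.3),
    "background": (0.25, 0.3),
    "focus":      (0.0, 0.5),
    "variance":   (-0.1, 0.1),
}

def validate(inputs):
    """Raise ValueError if any parameter falls outside its documented range."""
    for name, value in inputs.items():
        lo, hi = RANGES[name]
        if not lo <= value <= hi:
            raise ValueError(f"{name}={value} outside [{lo}, {hi}]")
    return inputs

# Example txt2img-leaning settings: high richness with negative variance,
# per the Variance bullet above.
payload = validate({
    "richness": 2.0,
    "contrast": 1.0,
    "texture": 1.0,
    "background": 0.3,
    "focus": 0.5,
    "variance": -0.1,
})
```

A payload like this would then be merged with the usual prompt fields and passed to the model's run endpoint.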