zeke / sd3-inpainting-with-differential-diffusion

Stable Diffusion 3 with Differential Diffusion inpainting (experimental)

  • Public
  • 264 runs
  • L40S
  • GitHub

Input

string

Input prompt

Default: ""

string

Input negative prompt

Default: ""

file
image

Input image for img2img mode

file
mask

Mask for inpainting. White pixels will be inpainted and black pixels will be preserved. Gray pixels will be partially inpainted. If using a `mask` input, you must also provide an `image` input. A `prompt_strength` setting of >0.8 usually works well. Note that Stable Diffusion 3 was not trained for inpainting, so your mileage may vary.
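The mask, image, and prompt-strength inputs work together, so a request has to send all three. The sketch below builds such a request payload; the field names `prompt`, `image`, `mask`, `prompt_strength`, and `disable_safety_checker` are taken from the descriptions on this page, but the exact schema should be verified against the model's API tab before use.

```python
# Hedged sketch of an inpainting request body for this model.
# White mask pixels are inpainted, black preserved, gray partially inpainted.
import json

inputs = {
    "prompt": "a lush garden replacing the parking lot",
    "image": "https://example.com/photo.png",  # required whenever a mask is used
    "mask": "https://example.com/mask.png",    # grayscale inpainting mask
    "prompt_strength": 0.85,                   # >0.8 usually works well here
    "disable_safety_checker": False,           # toggle is API-only
}

# Wrap the inputs the way the Replicate predictions API expects and send the
# payload with your HTTP client of choice (plus your API token), or pass
# `inputs` directly to a Replicate client library's run method.
payload = json.dumps({"input": inputs})
print(payload)
```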

string

Aspect ratio for the generated image

Default: "1:1"

integer
(minimum: 1, maximum: 4)

Number of images to output.

Default: 1

number
(minimum: 0, maximum: 50)

Scale for classifier-free guidance

Default: 7

number
(minimum: 0, maximum: 1)

Prompt strength when using img2img. 1.0 corresponds to full destruction of the information in the input image.

Default: 0.6

integer

Random seed. Leave blank to randomize the seed

string

Format of the output images

Default: "webp"

integer
(minimum: 0, maximum: 100)

Quality of the saved output images, from 0 (lowest) to 100 (best). Not relevant for `.png` outputs.

Default: 80

boolean

This model’s safety checker can’t be disabled when running on the website. Learn more about platform safety on Replicate.

Disable safety checker for generated images. This feature is only available through the API. See [https://replicate.com/docs/how-does-replicate-work#safety](https://replicate.com/docs/how-does-replicate-work#safety)

Default: false

Output


This example was created by a different version, zeke/sd3-inpainting-with-differential-diffusion:42628859.

Run time and cost

This model costs approximately $0.11 to run on Replicate, or 9 runs per $1, but this varies depending on your inputs. It is also open source and you can run it on your own computer with Docker.

This model runs on Nvidia L40S GPU hardware. Predictions typically complete within 113 seconds. The predict time for this model varies significantly based on the inputs.

Readme

Differential Diffusion is an inpainting technique that uses the grayscale level of each pixel in a mask image to control the intensity of inpainting, allowing selective modification of different image regions.
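Because intensity is per-pixel, a soft mask can blend an edit smoothly into the original image. As a minimal illustration, the following sketch writes a left-to-right grayscale gradient mask using only the standard library (binary PGM format, so no imaging library is needed; convert to PNG before uploading if the model requires it):

```python
# Build a gradient mask for Differential Diffusion: black (0) on the left
# edge preserves the image, white (255) on the right edge is fully
# inpainted, and the grays in between are partially inpainted.

def gradient_mask_pgm(width: int, height: int, path: str) -> None:
    # Each column's gray level rises linearly from 0 to 255 across the width.
    row = bytes(round(255 * x / (width - 1)) for x in range(width))
    with open(path, "wb") as f:
        f.write(f"P5 {width} {height} 255\n".encode("ascii"))  # PGM header
        f.write(row * height)                                  # pixel data

gradient_mask_pgm(1024, 1024, "mask.pgm")
```

A radial or feathered mask built the same way would confine the edit to one region while fading it out toward the edges.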

⚠️ This is an experimental model and a work in progress. See https://github.com/replicate/cog-stable-diffusion-3/pull/4

This model is for non-commercial use only and is intended for exploration. To use it commercially, see the Stability AI Self-Hosted License or use Replicate’s official version of SD3 at replicate.com/stability-ai/stable-diffusion-3.