zsxkib / flux-dev-inpainting

🎨 Fill in masked parts of images with FLUX.1-dev 🖌️

  • Public
  • 347.3K runs
  • A100 (80GB)
  • GitHub
  • License

Input

*file
image

Input image for inpainting

*file
mask

Mask image

*string

Text prompt for inpainting

number
(minimum: 0, maximum: 1)

Strength of inpainting. Higher values allow for more deviation from the original image.

Default: 0.85

integer
(minimum: 1, maximum: 50)

Number of denoising steps. More steps usually lead to a higher quality image at the expense of slower inference.

Default: 30

number
(minimum: 1, maximum: 20)

Guidance scale as defined in Classifier-Free Diffusion Guidance. Higher guidance scale encourages images that are closely linked to the text prompt, usually at the expense of lower image quality.

Default: 7

integer
(minimum: 128, maximum: 2048)

Height of the output image. Will be rounded to the nearest multiple of 8.

Default: 1024

integer
(minimum: 128, maximum: 2048)

Width of the output image. Will be rounded to the nearest multiple of 8.

Default: 1024

integer
(minimum: 1, maximum: 8)

Number of images to generate per prompt.

Default: 1

integer

Random seed. Leave blank to randomize the seed.

string

Format of the output image

Default: "webp"

integer
(minimum: 0, maximum: 100)

Quality of the output image, from 0 to 100. 100 is best quality, 0 is lowest quality.

Default: 80
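Putting the schema above together, here is a minimal sketch of a request using the Replicate Python client. Only `image` and `mask` are named in the listing above; every other field name (`prompt`, `strength`, `num_inference_steps`, and so on) is an assumption inferred from the descriptions, so confirm them against the model's API schema. The helper mirrors the documented rounding of height and width to the nearest multiple of 8.

```python
def round_to_multiple_of_8(x: int) -> int:
    # Height/width are "rounded to the nearest multiple of 8" per the schema.
    return int(round(x / 8)) * 8

# All parameter names below except "image" and "mask" are assumptions
# based on the field descriptions; check the model's API tab for the
# exact schema before relying on them.
inputs = {
    "image": "https://example.com/input.png",  # required: input image for inpainting
    "mask": "https://example.com/mask.png",    # required: mask image
    "prompt": "a red brick wall",              # required: text prompt
    "strength": 0.85,            # 0-1, default 0.85
    "num_inference_steps": 30,   # 1-50, default 30
    "guidance_scale": 7,         # 1-20, default 7
    "height": round_to_multiple_of_8(1000),  # 128-2048, default 1024
    "width": round_to_multiple_of_8(1000),
    "num_outputs": 1,            # 1-8, default 1
    "output_format": "webp",
    "output_quality": 80,        # 0-100, default 80
}

# import replicate  # pip install replicate; requires REPLICATE_API_TOKEN
# output = replicate.run("zsxkib/flux-dev-inpainting", input=inputs)
```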

Output


This example was created by a different version, zsxkib/flux-dev-inpainting:11cca327.

Run time and cost

This model costs approximately $0.040 to run on Replicate, or 25 runs per $1, but this varies depending on your inputs. It is also open source and you can run it on your own computer with Docker.

This model runs on Nvidia A100 (80GB) GPU hardware. Predictions typically complete within 29 seconds. The predict time for this model varies significantly based on the inputs.
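As a sanity check, the quoted price is consistent with the typical predict time if we assume Replicate's A100 (80GB) rate of roughly $0.0014 per second (an assumed figure; verify against current pricing):

```python
rate_per_second = 0.0014  # assumed A100 (80GB) rate in USD; check current pricing
predict_time = 29         # seconds, typical per the stats above

cost_per_run = rate_per_second * predict_time
runs_per_dollar = 1 / cost_per_run
print(f"~${cost_per_run:.3f} per run, ~{runs_per_dollar:.0f} runs per $1")
```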

Readme

FLUX.1 DEV Inpainting Model

About

This is a version of the FLUX.1-dev inpainting model by @skalskip92, modified to preserve the original image's dimensions. The model expects the mask to be the same size as the input image, though you can adjust the output size with the width and height settings.
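Since the mask must match the input image's dimensions, here is a sketch of building one with Pillow (an assumed dependency). White marks the region to repaint and black is kept, which is the usual inpainting convention, though this page does not spell it out:

```python
from PIL import Image, ImageDraw  # Pillow; an assumption about your environment

img = Image.new("RGB", (1024, 768), "gray")  # stand-in for your input image

# The mask must have the same size as the input image.
mask = Image.new("L", img.size, 0)  # start fully black (keep everything)
ImageDraw.Draw(mask).rectangle([300, 200, 700, 500], fill=255)  # white = inpaint here

# mask.save("mask.png")  # then pass it alongside the input image
```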

Big thanks to @Gothos13 for helping create this clever inpainting method.

Tips for Use

  • For better results, use more inference steps (20–30) when generating the image.
  • Experiment with the inpainting 'strength'. Values between 0.85 and 1.0 often work well, but different prompts may call for different strengths.
  • Keep in mind that FLUX wasn't specifically trained for inpainting; there is no dedicated inpainting component, which makes this a clever way to repurpose the model's capabilities.
  • The model can still render text in images.

Note

This inpainting method can produce great images, but you may need a few attempts to get what you want. Don't give up if your first try isn't perfect!
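One cheap way to get those extra attempts is to vary the seed between runs. A sketch, with the actual API call commented out and the field names assumed rather than confirmed:

```python
import random

prompt = "a red brick wall"
for attempt in range(4):
    # Omitting "seed" entirely would also randomize it, per the schema above.
    seed = random.randint(0, 2**31 - 1)
    print(f"attempt {attempt}: seed={seed}")
    # output = replicate.run(
    #     "zsxkib/flux-dev-inpainting",
    #     input={"image": ..., "mask": ..., "prompt": prompt, "seed": seed},
    # )
```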

For more info and updates, check out the original tweet thread.

Support

If you like my work, please follow me! @zsakib_