adirik / inst-inpaint

Inst-Inpaint: Instructing to Remove Objects with Diffusion Models

  • Public
  • 444 runs
  • GitHub
  • Paper
  • License

Run time and cost

This model costs approximately $0.025 to run on Replicate, or 40 runs per $1, but this varies depending on your inputs. It is also open source and you can run it on your own computer with Docker.

This model runs on Nvidia A40 GPU hardware. Predictions typically complete within 43 seconds. The predict time for this model varies significantly based on the inputs.
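Since the model is packaged with Cog, a locally built container (as mentioned above, you can run it yourself with Docker) can be queried over Cog's HTTP prediction endpoint. The sketch below is a minimal illustration only: it assumes the container is already running on localhost:5000 and that the input fields are named "image" and "instruction", which may not match the model's actual schema.

# Minimal sketch of calling a locally running Cog container for this model.
# Assumptions: the container listens on localhost:5000 and the input fields
# are named "image" and "instruction" (check the real input schema before use).
import base64
import requests

# Cog accepts file inputs as base64-encoded data URIs.
with open("photo.jpg", "rb") as f:
    data_uri = "data:image/jpeg;base64," + base64.b64encode(f.read()).decode()

resp = requests.post(
    "http://localhost:5000/predictions",
    json={"input": {"image": data_uri, "instruction": "remove the kite"}},
    timeout=300,
)
resp.raise_for_status()
prediction = resp.json()
print(prediction["status"])   # e.g. "succeeded"
print(prediction["output"])   # typically a URL or data URI for the edited image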

Readme

Inst-Inpaint

Inst-Inpaint is a diffusion-based model that performs text-guided object removal from images. This model is a Cog wrapper around the authors' original implementation.

Using the Model

To use the model, simply upload the image you want to edit and enter a text instruction describing the object(s) to remove (e.g. “remove the can in the background”, “remove the kite”).
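The hosted model can also be called programmatically with the Replicate Python client. The snippet below is a sketch rather than the model's documented API: the input field names ("image", "instruction") are assumptions, and you may need to pin an explicit model version depending on your client.

# Sketch of running the hosted model via the Replicate Python client.
# Assumptions: the input field names ("image", "instruction") and the exact
# output type may differ from the schema shown on the model's API page.
import replicate

output = replicate.run(
    "adirik/inst-inpaint",   # pin a version ("adirik/inst-inpaint:<version-id>") if required
    input={
        "image": open("kitchen.jpg", "rb"),            # image to edit
        "instruction": "remove the can in the background",
    },
)

# The client typically returns a URL (or file-like output) for the edited image.
print(output)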

Note that Inst-Inpaint is trained on and outputs 256 x 256 images. To get higher-resolution results, you can optionally chain Inst-Inpaint with a super-resolution model to upscale its output, as sketched below.
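One way to chain the two steps with the Replicate Python client is sketched here; "owner/some-upscaler" is a hypothetical placeholder for whichever super-resolution model you choose, and the input field names are assumptions to be checked against the real schemas.

# Sketch of chaining Inst-Inpaint with a super-resolution model on Replicate.
# "owner/some-upscaler" is a hypothetical placeholder; the input field names
# are assumptions and should be replaced with the actual schemas.
import replicate

# Step 1: remove the object (the result is a 256 x 256 image).
inpainted = replicate.run(
    "adirik/inst-inpaint",
    input={
        "image": open("photo.jpg", "rb"),
        "instruction": "remove the kite",
    },
)

# Step 2: pass the low-resolution result (typically returned as a URL)
# to an upscaling model as its image input.
upscaled = replicate.run(
    "owner/some-upscaler",   # hypothetical placeholder for a super-resolution model
    input={"image": inpainted},
)

print(upscaled)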

References

@misc{yildirim2023instinpaint,
      title={Inst-Inpaint: Instructing to Remove Objects with Diffusion Models}, 
      author={Ahmet Burak Yildirim and Vedat Baday and Erkut Erdem and Aykut Erdem and Aysegul Dundar},
      year={2023},
      eprint={2304.03246},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}