Edit images
These models let you edit and manipulate images in a variety of ways. Key capabilities include:
- Inpainting & object removal - Remove objects or fill in missing image regions. Useful for removing unwanted elements from photos.
- Background removal - Delete and replace image backgrounds. Isolate subjects onto transparent or alternate backgrounds.
- Guided editing - Make targeted edits to images by adding guiding information like depth maps, sketches, edge detection, human poses, or text prompts. Allows fine-grained creative control.
Note: For face restoration and photo colorization, see our Upscale Images collection.
Our Picks
Best Inpainting Model: logerzhu/ad-inpaint
For removing objects and filling in missing regions of an image, we recommend starting with logerzhu/ad-inpaint. It delivers strong results and runs faster than alternatives like timothybrooks/instruct-pix2pix.
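If you want to try it from code, a minimal sketch with the Replicate Python client might look like the following. The `image_path` and `prompt` input names are assumptions, so check the model page for the exact input schema (and pin a version if you need reproducible results):

```python
import replicate  # requires REPLICATE_API_TOKEN to be set in your environment

# Hedged sketch: the input names below are assumptions; see the
# logerzhu/ad-inpaint model page for the exact schema and available versions.
output = replicate.run(
    "logerzhu/ad-inpaint",
    input={
        "image_path": open("product-photo.jpg", "rb"),
        "prompt": "product on a clean marble countertop",
    },
)
print(output)
```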
Best Background Removal: pollinations/modnet
To isolate a subject and replace the background of an image, pollinations/modnet is the clear choice. It’s optimized for this specific task and widely used.
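A minimal call sketch, assuming the model takes a single `image` input (verify this against the model page):

```python
import replicate  # requires REPLICATE_API_TOKEN to be set in your environment

# Hedged sketch: the "image" input name is an assumption; check the
# pollinations/modnet model page for the exact schema.
output = replicate.run(
    "pollinations/modnet",
    input={"image": open("portrait.jpg", "rb")},
)
print(output)  # typically a link to the subject cut out from its background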
Best Guided Editing: alaradirik/t2i-adapter-sdxl-depth-midas and other t2i-adapter models
For the most control over editing an image, try the alaradirik/t2i-adapter-sdxl-depth-midas model. It lets you modify an image using an automatically generated depth map.
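As a rough sketch of the call shape (the `image` and `prompt` input names are assumptions; confirm them on the model page):

```python
import replicate  # requires REPLICATE_API_TOKEN to be set in your environment

# Hedged sketch: input names are assumptions; see the
# alaradirik/t2i-adapter-sdxl-depth-midas page for the exact schema.
output = replicate.run(
    "alaradirik/t2i-adapter-sdxl-depth-midas",
    input={
        "image": open("living-room.jpg", "rb"),
        "prompt": "the same living room, redecorated in mid-century modern style",
    },
)
print(output)
```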
The other models in the t2i-adapter-sdxl-* series are also great for guided editing using different types of inputs:
- alaradirik/t2i-adapter-sdxl-lineart - Edit using line art
- alaradirik/t2i-adapter-sdxl-canny - Edit using edge detection
- alaradirik/t2i-adapter-sdxl-sketch - Edit using sketches
- alaradirik/t2i-adapter-sdxl-openpose - Edit using human pose detection
Promising New Approaches
For even more advanced guided editing, two newer models are worth exploring:
adirik/masactrl-sdxl allows you to edit specific regions of an image using text prompts. It provides fine-grained control for local edits.
adirik/stylemc is a unique model that generates new images in the style of an input image based on a text prompt. While not a traditional editing model, it enables creative exploration.
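If you want to experiment with stylemc, a hedged sketch of the call might look like this; the `image` and `prompt` input names are assumptions and the real schema may differ:

```python
import replicate  # requires REPLICATE_API_TOKEN to be set in your environment

# Hedged sketch: input names are assumptions; check the adirik/stylemc
# model page for the actual input schema before running.
output = replicate.run(
    "adirik/stylemc",
    input={
        "image": open("reference-style.jpg", "rb"),
        "prompt": "a portrait in the style of the reference image",
    },
)
print(output)
```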
Recommended models
- rossjillian/controlnet - Control diffusion models
- cjwbw/rembg - Remove image backgrounds
- andreasjansson/stable-diffusion-inpainting - Inpainting using RunwayML's stable-diffusion-inpainting checkpoint
- orpatashnik/styleclip - Text-Driven Manipulation of StyleGAN Imagery
- timothybrooks/instruct-pix2pix - Edit images with human instructions
- pollinations/modnet - A deep learning approach to removing the background and adding a new background image
- logerzhu/ad-inpaint - Product advertising image generator
- arielreplicate/deoldify_image - Add colours to old images
- adirik/t2i-adapter-sdxl-depth-midas - Modify images using depth maps
- adirik/t2i-adapter-sdxl-openpose - Modify images using human pose
- ideogram-ai/ideogram-v2-turbo - A fast image model with state-of-the-art inpainting, prompt comprehension and text rendering
- adirik/t2i-adapter-sdxl-lineart - Modify images using line art
- ideogram-ai/ideogram-v2 - An excellent image model with state-of-the-art inpainting, prompt comprehension and text rendering
- storymy/take-off-eyeglasses - Remove eyeglasses and shadows from photos
- adirik/t2i-adapter-sdxl-canny - Modify images using canny edges
- sujaykhandekar/object-removal - Removes specified objects from an image
- adirik/t2i-adapter-sdxl-sketch - Modify images using sketches
- lambdal/image-mixer - Image Mixer Stable Diffusion
- daanelson/plug_and_play_image_translation - Edit an image using features from diffusion models
- cjwbw/pix2pix-zero - Zero-shot Image-to-Image Translation
- adirik/kosmos-g - Kosmos-G: Generating Images in Context with Multimodal Large Language Models
- cjwbw/repaint - Inpainting using Denoising Diffusion Probabilistic Models
- adirik/masactrl-sdxl - Editable image generation with MasaCtrl-SDXL
- adirik/stylemc - Text-guided image generation and editing
- adirik/masactrl-stable-diffusion-v1-4 - Edit real or generated images