Generate images

These models generate images from text prompts. Many of these models are based on Stable Diffusion.

Read our guide to learn more about using Stable Diffusion.

  • Text-to-image - Convert text prompts to photorealistic images. Useful for quickly visualizing concepts
  • Control over style - Adjust image properties like lighting and texture via prompts
  • Inpainting - Expand, edit, or refine images by filling in missing regions
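To make this concrete, here is a minimal sketch of calling a text-to-image model with the Replicate Python client. The prompt and the `width`/`height` parameters are illustrative; check each model's input schema for the exact fields it accepts. The sketch only makes a network call when a `REPLICATE_API_TOKEN` is set.

```python
# Sketch: text-to-image on Replicate (assumes `pip install replicate`
# and a REPLICATE_API_TOKEN in the environment for the live call).
import os


def build_text_to_image_input(prompt: str, width: int = 1024, height: int = 1024) -> dict:
    """Assemble an input payload for an SDXL-style text-to-image model.

    Field names mirror common SDXL inputs; individual models may differ.
    """
    return {"prompt": prompt, "width": width, "height": height}


payload = build_text_to_image_input("an astronaut riding a horse, photorealistic")

if os.environ.get("REPLICATE_API_TOKEN"):
    import replicate

    # replicate.run resolves the model's latest version and blocks
    # until the prediction finishes, returning the output (image URLs).
    output = replicate.run("stability-ai/sdxl", input=payload)
    print(output)
else:
    # No credentials: just show the payload we would have sent.
    print(payload)
```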

Our Picks

Best overall image generation model: stability-ai/sdxl

The best overall image generation model is stability-ai/sdxl. It supports LoRA fine-tuning, which means you can customize the model to produce specific styles or subjects. For more information about how to fine-tune SDXL, read our detailed guide to fine-tuning Stable Diffusion.

Best ComfyUI model: fofr/any-comfyui-workflow

If you’re a fan of ComfyUI, you can export any of your favorite ComfyUI workflows to JSON and run them on Replicate using the fofr/any-comfyui-workflow model. For more information, check out our detailed guide to using ComfyUI.
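As a rough sketch of what that looks like in code: a ComfyUI export is plain JSON, which you pass to the model as a string. The `workflow_json` input name is an assumption based on the fofr/any-comfyui-workflow model and may change; the stub workflow below stands in for a real export. The live call only runs when a `REPLICATE_API_TOKEN` is set.

```python
# Sketch: running an exported ComfyUI workflow on Replicate.
# Assumes `pip install replicate`; the `workflow_json` field name is
# taken from fofr/any-comfyui-workflow and is not guaranteed stable.
import json
import os

# Stub standing in for a real ComfyUI "Save (API Format)" export.
workflow = {"3": {"class_type": "KSampler", "inputs": {"seed": 42}}}
workflow_json = json.dumps(workflow)

if os.environ.get("REPLICATE_API_TOKEN"):
    import replicate

    output = replicate.run(
        "fofr/any-comfyui-workflow",
        input={"workflow_json": workflow_json},
    )
    print(output)
else:
    # No credentials: show the serialized workflow we would have sent.
    print(workflow_json)
```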

Best fast image generation model: lucataco/sdxl-lightning-4step

The best-looking fast image generation model is lucataco/sdxl-lightning-4step, which generates an image in 1.6 seconds. The fastest image generation model is fofr/latent-consistency-model, which generates an image in 0.6 seconds.

Best fine-tunes

Make sure to check out our SDXL fine-tunes collection, which includes all publicly available SDXL fine-tunes hosted on Replicate. This collection should help you get a feel for the sorts of things you can do with fine-tuning.

Recommended models

stability-ai/stable-diffusion

A latent text-to-image diffusion model capable of generating photo-realistic images given any text input

108M runs

bytedance/sdxl-lightning-4step

SDXL-Lightning by ByteDance: a fast text-to-image model that makes high-quality images in 4 steps

64.3M runs

stability-ai/sdxl

A text-to-image generative AI model that creates beautiful images

51.9M runs

stability-ai/stable-diffusion-inpainting

Fill in masked parts of images with Stable Diffusion

16.8M runs

ai-forever/kandinsky-2.2

A multilingual text-to-image latent diffusion model

9.2M runs

ai-forever/kandinsky-2

A text-to-image model trained on LAION HighRes and fine-tuned on internal datasets

6.1M runs

fofr/sdxl-emoji

An SDXL fine-tune based on Apple Emojis

4.4M runs

tstramer/material-diffusion

A Stable Diffusion fork for generating tileable outputs using the v1.5 model

2.1M runs

lucataco/proteus-v0.2

Proteus v0.2 shows subtle yet significant improvements over Version 0.1. It demonstrates enhanced prompt understanding that surpasses MJ6, while also approaching its stylistic capabilities.

1.6M runs

fofr/latent-consistency-model

Super-fast, 0.6s per image. LCM with img2img, large batching and canny controlnet

929.5K runs

lucataco/ssd-1b

Segmind Stable Diffusion Model (SSD-1B) is a distilled 50% smaller version of SDXL, offering a 60% speedup while maintaining high-quality text-to-image generation capabilities

912.6K runs

playgroundai/playground-v2.5-1024px-aesthetic

Playground v2.5 is the state-of-the-art open-source model in aesthetic quality

444K runs

batouresearch/sdxl-controlnet-lora

Last update: now supports img2img. SDXL Canny ControlNet with LoRA support.

415.9K runs

fofr/realvisxl-v3-multi-controlnet-lora

RealVisXl V3 with multi-controlnet, lora loading, img2img, inpainting

301.3K runs

lucataco/realvisxl2-lcm

RealvisXL-v2.0 with LCM LoRA - requires fewer steps (4 to 8 instead of the original 40 to 50)

285.9K runs

fofr/sticker-maker

Make stickers with AI. Generates graphics with transparent backgrounds.

268.8K runs

playgroundai/playground-v2-1024px-aesthetic

Playground v2 is a diffusion-based text-to-image generative model trained from scratch by the research team at Playground

265.4K runs

lucataco/realvisxl-v2.0

Implementation of SDXL RealVisXL_V2.0

259.8K runs

fofr/any-comfyui-workflow

Run any ComfyUI workflow. Guide: https://github.com/fofr/cog-comfyui

241.3K runs

fofr/sdxl-multi-controlnet-lora

Multi-controlnet, lora loading, img2img, inpainting

173.8K runs

lucataco/dreamshaper-xl-turbo

DreamShaper is a general-purpose SD model that aims to do everything well: photos, art, anime, and manga. It's designed to match Midjourney and DALL-E.

123.4K runs

lucataco/open-dalle-v1.1

A unique fusion that showcases exceptional prompt adherence and semantic understanding; it seems a step above base SDXL and a step closer to DALL-E 3 in prompt comprehension

86.9K runs

ai-forever/kandinsky-2-1

Kandinsky 2.1 Diffusion Model

82.1K runs

nightmareai/disco-diffusion

Generate images using a variety of techniques - Powered by Discoart

64K runs

adirik/realvisxl-v3.0-turbo

Photorealism with RealVisXL V3.0 Turbo based on SDXL

57.3K runs

lucataco/pixart-xl-2

PixArt-Alpha 1024px is a transformer-based text-to-image diffusion system trained on text embeddings from T5

42.1K runs

adirik/realvisxl-v4.0

Photorealism with RealVisXL V4.0

16.5K runs

lucataco/proteus-v0.3

ProteusV0.3: The Anime Update

16.3K runs

lucataco/thinkdiffusionxl

ThinkDiffusionXL is a go-to model capable of amazing photorealism that's also versatile enough to generate high-quality images across a variety of styles and subjects without needing to be a prompting genius

13.3K runs

lucataco/realistic-vision-v5

Realistic Vision v5.0 with VAE

12.2K runs

artificialguybr/nebul.redmond

Nebul.Redmond - Stable Diffusion SD XL Finetuned Model

10.8K runs

fofr/txt2img

Many models: RealVisXL, Juggernaut, Proteus, DreamShaper, etc.

8.1K runs

adirik/kosmos-g

Kosmos-G: Generating Images in Context with Multimodal Large Language Models

3.6K runs

lucataco/playground-v2

Playground v2 is a diffusion-based text-to-image generative model trained from scratch

3.2K runs

adirik/masactrl-sdxl

Editable image generation with MasaCtrl-SDXL

3.1K runs

lucataco/sdxl-deepcache

SDXL using DeepCache

3.1K runs