Explore
Featured models

prunaai / vace-14b
This is the VACE-14B model, optimised with Pruna AI. Wan2.1 VACE is an all-in-one model for video creation and editing.

kwaivgi / kling-v2.0
Generate 5s and 10s videos in 720p resolution

pixverse / pixverse-v4.5
Quickly make 5s or 8s videos at 540p, 720p or 1080p, with enhanced motion, strong prompt coherence, and good handling of complex actions.

zsxkib / framepack
🕹️FramePack: video diffusion that feels like image diffusion🎥

lucataco / ace-step
ACE-Step: a step towards a music generation foundation model (text-to-music)

minimax / speech-02-hd
Text-to-Audio (T2A) that offers voice synthesis, emotional expression, and multilingual capabilities. Optimized for high-fidelity applications like voiceovers and audiobooks.

minimax / voice-cloning
Clone voices to use with Minimax's speech-02-hd and speech-02-turbo

ideogram-ai / ideogram-v3-turbo
Turbo is the fastest and cheapest Ideogram v3. v3 creates images with stunning realism, creative designs, and consistent styles

ideogram-ai / ideogram-v3-quality
The highest quality Ideogram v3 model. v3 creates images with stunning realism, creative designs, and consistent styles
Official models
Official models are always on, maintained, and have predictable pricing.

Fine-tune FLUX
Customize FLUX.1 [dev] with Ostris's AI Toolkit on Replicate. Train the model to recognize and generate new concepts using a small set of example images, for specific styles, characters, or objects. (Generated with davisbrown/flux-half-illustration.)
I want to…
Make videos with Wan2.1
Generate videos with Wan2.1, the fastest and highest quality open-source video generation model.
Generate images
Models that generate images from text prompts
Generate videos
Models that create and edit videos
Caption images
Models that generate text from images
Transcribe speech
Models that convert speech to text
Generate speech
Convert text to speech
Use handy tools
Toolbelt-type models for videos and images.
Upscale images
Upscaling models that create high-quality images from low-quality images
Restore images
Models that improve or restore images by deblurring, colorizing, and removing noise
Use a face to make images
Make realistic images of people instantly
Edit images
Tools for manipulating images.
Caption videos
Models that generate text from videos
Generate text
Models that can understand and generate text
Use official models
Official models are always on, maintained, and have predictable pricing.
Enhance videos
Models that enhance videos with super-resolution, sound effects, motion capture and other useful production effects.
Remove backgrounds
Models that remove backgrounds from images and videos
Detect objects
Models that detect or segment objects in images and videos.
Generate music
Models to generate and modify music
Sing with voices
Voice-to-voice cloning and musical prosody
Make 3D stuff
Models that generate 3D objects, scenes, radiance fields, textures and multi-views.
Chat with images
Ask language models about images
Extract text from images
Optical character recognition (OCR) and text extraction
Get embeddings
Models that generate embeddings from inputs
Use the FLUX family of models
The FLUX family of text-to-image models from Black Forest Labs
Use FLUX fine-tunes
Browse the diverse range of fine-tunes the community has custom-trained on Replicate
Control image generation
Guide image generation with more than just text. Use edge detection, depth maps, and sketches to get the results you want.
Popular models
SDXL-Lightning by ByteDance: a fast text-to-image model that makes high-quality images in 4 steps
Generate CLIP (clip-vit-large-patch14) text & image embeddings
Return CLIP features for the clip-vit-large-patch14 model
Practical face restoration algorithm for *old photos* or *AI-generated faces*
A text-to-image generative AI model that creates beautiful images
Latest models
Granite-Embedding-278M-Multilingual is a 278M parameter model from the Granite Embeddings suite that can be used to generate high quality text embeddings
This is an optimised version of the FLUX schnell model from Black Forest Labs, built with the Pruna tool. We achieve a ~3x speedup over the original model with minimal quality loss.
Generate 5s and 10s videos in 720p resolution at 30fps
A version of flux-dev, a text to image model, that supports fast fine-tuned lora inference
Create 5s or 8s videos at 360p, 540p, 720p or 1080p in as little as 10 seconds.
Revival of https://github.com/pollinations/stable-diffusion-audio-reactive
An AI system that can create realistic images and art from a description in natural language.
This is the VACE-1.3B model, optimised with Pruna AI. Wan2.1 VACE is an all-in-one model for video creation and editing.
Scaling Diffusion Models for High Resolution Textured 3D Assets Generation
Quickly generate smooth 5s or 8s videos at 540p, 720p or 1080p
A DiT-based 13B video generation model that creates 30fps video
👗Bytedance's DreamO: unified image customization model (IP, ID, Style, Try-On, etc.)🧣
Run any ComfyUI workflow. Guide: https://github.com/replicate/cog-comfyui
This model generates pose variations of a cartoon character while preserving its identity. Use it to augment the training dataset for any AI-created cartoon character; the augmented dataset can then be used to train a LoRA model.
GPU-accelerated replay renderer and video data clipper for comma.ai connect's openpilot route data. See the README.
Uses DINO to detect regions, then refines them with SAM. Returns mask data as RLE-encoded JSON.
FramePack video generation with image + motion prompt. Based on Stanford's 2025 model.
An enhanced version of sd-interior-design, featuring an improved diffusion model
Generate 5s and 9s 720p videos, faster and cheaper than Ray 2
Generate 5s and 9s 540p videos, faster and cheaper than Ray 2
Text-to-Audio (T2A) that offers voice synthesis, emotional expression, and multilingual capabilities. Designed for real-time applications with low latency