Explore

I want to…

Make videos with Wan2.1

Generate videos with Wan2.1, the fastest and highest-quality open-source video generation model.

Restore images

Models that improve or restore images through deblurring, colorization, and noise removal.

Enhance videos

Models that enhance videos with super-resolution, sound effects, motion capture, and other useful production effects.

Detect objects

Models that detect or segment objects in images and videos.

Make 3D stuff

Models that generate 3D objects, scenes, radiance fields, textures and multi-views.

Use FLUX fine-tunes

Browse the diverse range of fine-tunes the community has custom-trained on Replicate.

Control image generation

Guide image generation with more than just text. Use edge detection, depth maps, and sketches to get the results you want.

Latest models

MediaPipe Blendshape Labeler - Predicts the blend shapes of an image.

Updated 202 runs

Fast FLUX DEV with ControlNets - Canny, Depth, Line Art, and Upscaler. You can use just one ControlNet or all of them. LoRAs: HyperFlex LoRA, Add Details LoRA, Realism LoRA.

Updated 120.5K runs

Qwen2.5-Omni is an end-to-end multimodal model designed to perceive diverse modalities, including text, images, audio, and video, while simultaneously generating text and natural speech responses in a streaming manner.

Updated 6 runs

Fine-tune GR00T-N1 from Nvidia on a LeRobot dataset

Updated 78 runs

Vietnamese F5-TTS, released by the EraX-AI team.

Updated 37 runs

Scaling Diffusion Models for High Resolution Textured 3D Assets Generation

Updated 3.2K runs

Controllable generative AI art

Updated 435 runs

Generates realistic talking face animations from a portrait image and audio using the CVPR 2025 Sonic model

Updated 41 runs

Transform your portrait photos into any style or setting while preserving your facial identity

Updated 1.6K runs

Wan 2.1 1.3B video-to-video. Wan is a powerful visual generation model developed by the Tongyi Lab of Alibaba Group.

Updated 238 runs

Easily create video datasets with auto-captioning for Hunyuan-Video LoRA finetuning

Updated 526 runs

Cost-optimized MMAudio V2 (T4 GPU): Add sound to video using this version running on T4 hardware for lower cost. Synthesizes high-quality audio from video content.

Updated 14 runs

Add sound to video using the MMAudio V2 model. An advanced AI model that synthesizes high-quality audio from video content, enabling seamless video-to-audio transformation.

Updated 267.2K runs

A Redux adapter trained from scratch on Flex.1-alpha that also works with FLUX.1-dev.

Updated 20 runs

Run any ComfyUI workflow. Guide: https://github.com/replicate/cog-comfyui

Updated 3M runs

Indic Parler-TTS Pretrained is a multilingual Indic extension of Parler-TTS Mini.

Updated 32 runs

SANA-Sprint: One-Step Diffusion with Continuous-Time Consistency Distillation

Updated 7.9K runs

img2img inference with the flux_schnell model.

Updated 52.5K runs

flux dev

Updated 93K runs

black-forest-labs/flux-dev-lora

A version of flux-dev, a text-to-image model, that supports fast fine-tuned LoRA inference.

Updated 670.2K runs

black-forest-labs/flux-schnell-lora

The fastest image generation model tailored for fine-tuned use

Updated 949.2K runs

black-forest-labs/flux-fill-dev

Open-weight inpainting model for editing and extending images. Guidance-distilled from FLUX.1 Fill [pro].

Updated 234.1K runs

black-forest-labs/flux-1.1-pro-ultra

FLUX1.1 [pro] in ultra and raw modes. Images are up to 4 megapixels. Use raw mode for realism.

Updated 9.1M runs

black-forest-labs/flux-1.1-pro

Faster, better FLUX Pro. Text-to-image model with excellent image quality, prompt adherence, and output diversity.

Updated 24.8M runs
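Models like these can be invoked programmatically. A minimal sketch with Replicate's official Python client, assuming `replicate` is installed, `REPLICATE_API_TOKEN` is set in the environment, and the input fields shown match the model's current schema (field names vary per model):

```python
def build_input(prompt: str, **extra) -> dict:
    """Assemble an input payload. Exact fields vary per model; check its schema."""
    payload = {"prompt": prompt}
    payload.update(extra)
    return payload


def generate(prompt: str, model: str = "black-forest-labs/flux-1.1-pro"):
    """Run a hosted model on Replicate and return its output.

    Requires `pip install replicate` and a REPLICATE_API_TOKEN env var.
    """
    import replicate  # imported lazily so the sketch loads without the client

    return replicate.run(model, input=build_input(prompt))
```

Calling `generate("an astronaut riding a horse")` performs a billed API request; for image models the output is typically one or more URLs to the generated files.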

black-forest-labs/flux-pro

State-of-the-art image generation with top-of-the-line prompt following, visual quality, image detail, and output diversity.

Updated 10.7M runs

black-forest-labs/flux-fill-pro

Professional inpainting and outpainting model with state-of-the-art performance. Edit or extend images with natural, seamless results.

Updated 767.1K runs

black-forest-labs/flux-canny-pro

Professional edge-guided image generation. Control structure and composition using Canny edge detection.

Updated 172.3K runs

black-forest-labs/flux-depth-pro

Professional depth-aware image generation. Edit images while preserving spatial relationships.

Updated 125.5K runs

Fine-tune FLUX.1-dev using ai-toolkit

Updated 516.2K runs

Simple binary sentiment analysis with BERT

Updated 13 runs

Unofficial implementation of TripoSG.

Updated 53 runs

An optimized version of SDXL-Lightning from ByteDance that is more than 2x faster and 2x cheaper.

Updated 9 runs

This is a Pruna-optimised version of the FLUX.1-dev model.

Updated 6.1K runs

For the paper "Structured 3D Latents for Scalable and Versatile 3D Generation".

Updated 219 runs

This is an optimised version of the FLUX schnell model from Black Forest Labs, using the Pruna tool. We achieve a ~3x speedup over the original model with minimal quality loss.

Updated 152 runs

This model is an optimised version of Stable Diffusion by Stability AI that is 3x faster and 3x cheaper.

Updated 133 runs

Flux.1-dev-Controlnet-Upscaler, a model by www.androcoders.in.

Updated 81 runs

wavespeedai/wan-2.1-t2v-480p

Accelerated inference for Wan 2.1 14B text-to-video, a comprehensive and open suite of video foundation models that pushes the boundaries of video generation.

Updated 28.9K runs

wavespeedai/wan-2.1-t2v-720p

Accelerated inference for Wan 2.1 14B text-to-video with high resolution, a comprehensive and open suite of video foundation models that pushes the boundaries of video generation.

Updated 8.6K runs

wavespeedai/wan-2.1-i2v-480p

Accelerated inference for Wan 2.1 14B image-to-video, a comprehensive and open suite of video foundation models that pushes the boundaries of video generation.

Updated 81.3K runs

wavespeedai/wan-2.1-i2v-720p

Accelerated inference for Wan 2.1 14B image-to-video with high resolution, a comprehensive and open suite of video foundation models that pushes the boundaries of video generation.

Updated 20.9K runs