

andreasjansson / clip-features
Return CLIP features for the clip-vit-large-patch14 model
147.1M runs
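Any model in this listing is invoked the same way: pass an input payload to the model identifier. A minimal sketch using Replicate's Python client, assuming the `replicate` package is installed and `REPLICATE_API_TOKEN` is exported; the payload keys below are illustrative assumptions, not the model's documented schema:

```python
def build_input(text=None, image_url=None):
    """Assemble an input payload for a Replicate model run.

    The keys "text" and "image" are assumptions for illustration;
    check the model's API schema page for its actual input names.
    """
    payload = {}
    if text is not None:
        payload["text"] = text
    if image_url is not None:
        payload["image"] = image_url
    return payload

# Actual call, commented out so this sketch has no network dependency:
# import replicate
# output = replicate.run(
#     "andreasjansson/clip-features",
#     input=build_input(text="a photo of a cat"),
# )
```

The same pattern applies to every model on this page; only the identifier and the input schema change.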


prunaai / p-image-edit
A sub-1-second, $0.01 multi-image editing model built for production use cases. For image generation, check out p-image here: https://replicate.com/prunaai/p-image
23.9M runs


vaibhavs10 / incredibly-fast-whisper
whisper-large-v3, incredibly fast, powered by Hugging Face Transformers! 🤗
29.7M runs


prunaai / z-image-turbo
Z-Image Turbo is a super-fast 6B-parameter text-to-image model developed by Tongyi-MAI.
37M runs

Highest-quality text-to-speech with <200ms latency, emotion control, and 15-language support
20.9K runs

bytedance / seedream-5-lite
Seedream 5.0 lite: image generation with built-in reasoning, example-based editing, and deep domain knowledge
584.7K runs
runwayml / gen-4.5
State-of-the-art video motion quality, prompt adherence and visual fidelity
64.5K runs

recraft-ai / recraft-v4
Recraft's latest image generation model, built around design taste. Strong prompt accuracy, art-directed composition, and integrated text rendering. Fast and cost-efficient at standard resolution.
184.9K runs
Generate videos using xAI's Grok Imagine Video model
344K runs

Moonshot AI's latest open model. It unifies vision and text, thinking and non-thinking modes, and single-agent and multi-agent execution into one model.
28.4K runs

Google's most intelligent model built for speed with frontier intelligence, superior search, and grounding
914K runs
prunaai / p-video
Fast video generation with built-in draft mode for rapid creative iteration. Text-to-video, image-to-video, and audio-to-video in a single endpoint.
427.5K runs

Very fast image generation and editing model. 4-step distilled, with sub-second inference for production and near-real-time applications.
9.4M runs

openai / gpt-image-1.5
OpenAI's latest image generation model with better instruction following and adherence to prompts
7.2M runs

Google's fast image generation model with conversational editing, multi-image fusion, and character consistency
4.7M runs

Compose a song from a prompt or a composition plan
30.6K runs
Official models are always on, maintained, and have predictable pricing.

Generate and edit high-quality images with Alibaba's Wan 2.7 Pro with 4K output, thinking mode, text-to-image, multi-image editing, and image set generation

Generate and edit images with Alibaba's Wan 2.7

Google's cost-efficient video generation model with native audio, optimized for high-volume applications
Edit videos with natural language instructions using Alibaba's Wan 2.7 VideoEdit model

Generate videos from images, with support for first-and-last-frame control, clip continuation, and audio synchronization using Alibaba's Wan 2.7 model
Generate videos with audio from text prompts using Alibaba's Wan 2.7 model. 1080p, up to 15 seconds, with audio synchronization.
Generate videos guided by reference images using xAI's Grok Imagine Video model
Extend videos with xAI's Grok Imagine Video model. Provide a source video and describe what happens next.

Ultra-fast, cost-efficient text-to-speech with ~120ms latency and 15-language support

Highest-quality text-to-speech with <200ms latency, emotion control, and 15-language support
High-fidelity video generation with portrait support, audio-to-video, retake, and extend. Text, image, and audio-driven creation up to 4K at 50 FPS.

OpenAI's most capable frontier model for complex professional work, coding, and multi-step reasoning.
Lightning-fast video generation with portrait support, camera controls, and synchronized audio. Up to 20 seconds at 1080p, 4K at 50 FPS.
Kling 3.0 motion control: transfer motion from a reference video to any character image with improved consistency and quality.
Fast video generation with text-to-video, image-to-video, and start-end-to-video modes. Up to 16 seconds at 1080p with synchronized audio.

The pro version of Qwen Image 2 from Alibaba's Qwen team. Enhanced text rendering, realism, and semantic adherence for high-quality image generation and editing.

A next-generation image generation and editing model from Alibaba's Qwen team. Supports text-to-image and image editing with strong text rendering, especially for Chinese.
Create realistic talking avatar videos from text with HeyGen's Avatar IV engine

Generate full-length songs with vocals, lyrics, and rich instrumentation from a text prompt
High-fidelity video generation with text-to-video, image-to-video, and start-end-to-video modes. Up to 16 seconds at 1080p with synchronized audio.
Use AI to generate images & photos with an API
Use AI to caption videos with an API
Use AI for text-to-speech or to clone your voice via API
Use AI to generate images from a face with an API
Use AI to generate videos with an API
Use AI to upscale images with super resolution with an API
Use AI to generate music with an API
Use AI to edit any image via API
Use AI to transcribe speech to text via API
Use AI for Optical Character Recognition (OCR) to extract text from images via API
Use AI to remove backgrounds from images and videos with an API
FLUX AI models: advanced image generation & editing via API
Use AI to restore images via API
Use AI to enhance videos via API
Detect NSFW content in images and text
Classify text by sentiment, topic, intent, or safety
Identify speakers from audio and video inputs
Replace faces across images with natural-looking results.
Transform rough sketches into polished visuals
Generate custom emojis from text or images
Create anime-style characters, scenes, and animations
Use AI to generate videos from images with an API
Official models are always on, predictably priced, and have a stable API.
Explore Large Language Models (LLMs) for chat, generation & NLP tasks via API
Try AI Models for free: video generation, image generation, upscaling, and photo restoration
Use AI to generate lipsync videos with an API
Use AI to create 3D content with an API
Chat with images for understanding, captioning & detection via API
Use AI to control image generation with an API
Embedding models for AI search and analysis
Use AI to edit your videos with an API
Use AI object detection and segmentation models to distinguish objects in images & videos
Flux fine-tunes: build and run custom AI image models via API
Kontext fine-tunes: Build custom AI image models with an API
Create songs with voice cloning models via API
AI media utilities: auto-caption, watermark, frame extraction & more via API
Browse the diverse range of qwen-image fine-tunes the community has custom-trained on Replicate.
WAN family of models: powerful image-to-video & text-to-video models
Use AI to caption images with an API

wan-video / wan-2.7-image-pro
Generate and edit high-quality images with Alibaba's Wan 2.7 Pro with 4K output, thinking mode, text-to-image, multi-image editing, and image set generation
718 runs

wan-video / wan-2.7-image
Generate and edit images with Alibaba's Wan 2.7
249 runs

marestreetmarket / multichannel
15 runs

google / veo-3.1-lite
Google's cost-efficient video generation model with native audio, optimized for high-volume applications
178 runs
wan-video / wan-2.7-videoedit
Edit videos with natural language instructions using Alibaba's Wan 2.7 VideoEdit model
56 runs

wan-video / wan-2.7-i2v
Generate videos from images, with support for first-and-last-frame control, clip continuation, and audio synchronization using Alibaba's Wan 2.7 model
211 runs
wan-video / wan-2.7-t2v
Generate videos with audio from text prompts using Alibaba's Wan 2.7 model. 1080p, up to 15 seconds, with audio synchronization.
33 runs


visionaix / metric3dv2
Metric3D v2 (TPAMI 2024): Monocular metric depth and surface normals from a single image. Predicts real-world depth in meters. Works indoor and outdoor.
12 runs


tomhermans / theretroposter01
31 runs

marestreetmarket / albedo
38 runs


palomamachado-png / palomacalazans
50 runs


jacquiedeering / jacquiedigital
26 runs