Explore
Featured models

black-forest-labs / flux-dev-lora
A version of flux-dev, a text-to-image model, that supports fast fine-tuned LoRA inference

fofr / wan2.1-with-lora
Run Wan2.1 14B or 1.3B with a LoRA

google-deepmind / gemma-3-27b-it
Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models.

wavespeedai / wan-2.1-i2v-480p
Accelerated inference for Wan 2.1 14B image-to-video, a comprehensive and open suite of video foundation models that pushes the boundaries of video generation.

wavespeedai / wan-2.1-t2v-480p
Accelerated inference for Wan 2.1 14B text-to-video, a comprehensive and open suite of video foundation models that pushes the boundaries of video generation.

wavespeedai / wan-2.1-t2v-720p
Accelerated inference for Wan 2.1 14B text-to-video with high resolution, a comprehensive and open suite of video foundation models that pushes the boundaries of video generation.

wan-video / wan-2.1-1.3b
Generate 5s 480p videos. Wan is an advanced and powerful visual generation model developed by Tongyi Lab of Alibaba Group.

ideogram-ai / ideogram-v2a
Like Ideogram v2, but faster and cheaper

anthropic / claude-3.7-sonnet
The most intelligent Claude model and the first hybrid reasoning model on the market (claude-3-7-sonnet-20250219)
I want to…
Make videos with Wan2.1
Generate videos with Wan2.1, the fastest and highest quality open-source video generation model.
Generate images
Models that generate images from text prompts
Generate videos
Models that create and edit videos
Caption images
Models that generate text from images
Transcribe speech
Models that convert speech to text
Use a face to make images
Make realistic images of people instantly
Generate text
Models that can understand and generate text
Upscale images
Upscaling models that create high-quality images from low-quality images
Use official models
Official models are always on, maintained, and have predictable pricing.
Restore images
Models that improve or restore images by deblurring, colorizing, and removing noise
Enhance videos
Models that enhance videos with super-resolution, sound effects, motion capture and other useful production effects.
Generate speech
Convert text to speech
Caption videos
Models that generate text from videos
Remove backgrounds
Models that remove backgrounds from images and videos
Use handy tools
Toolbelt-type models for videos and images.
Detect objects
Models that detect or segment objects in images and videos.
Generate music
Models to generate and modify music
Sing with voices
Voice-to-voice cloning and musical prosody
Make 3D stuff
Models that generate 3D objects, scenes, radiance fields, textures and multi-views.
Chat with images
Ask language models about images
Extract text from images
Optical character recognition (OCR) and text extraction
Get embeddings
Models that generate embeddings from inputs
Use the FLUX family of models
The FLUX family of text-to-image models from Black Forest Labs
Use FLUX fine-tunes
Browse the diverse range of fine-tunes the community has custom-trained on Replicate
Control image generation
Guide image generation with more than just text. Use edge detection, depth maps, and sketches to get the results you want.
Edit images
Tools for manipulating images.
Popular models
SDXL-Lightning by ByteDance: a fast text-to-image model that makes high-quality images in 4 steps
Practical face restoration algorithm for old photos or AI-generated faces
Robust face restoration algorithm for old photos/AI-generated faces
multilingual-e5-large: A multi-language text embedding model
Return CLIP features for the clip-vit-large-patch14 model
Real-ESRGAN with optional face correction and adjustable upscale
Latest models
Granite-Vision-3.2-2B is a compact and efficient vision-language model, specifically designed for visual document understanding.
Wan 2.1 1.3b Video to Video. Wan is a powerful visual generation model developed by Tongyi Lab of Alibaba Group
An upscaler based on tile and inpaint ControlNets, aimed at preserving the original image while injecting more detail.
Scaling Diffusion Models for High Resolution Textured 3D Assets Generation
Don't pay for DeepResearch. This new research agent solves everything for you, just give it time :D
Phi-4-multimodal-instruct is a lightweight open multimodal foundation model that leverages the language, vision, and speech research and datasets used for Phi-3.5 and 4.0 models.
Accelerated inference for Wan 2.1 14B image-to-video, a comprehensive and open suite of video foundation models that pushes the boundaries of video generation.
Accelerated inference for Wan 2.1 14B text-to-video, a comprehensive and open suite of video foundation models that pushes the boundaries of video generation.
Accelerated inference for Wan 2.1 14B image-to-video with high resolution, a comprehensive and open suite of video foundation models that pushes the boundaries of video generation.
Accelerated inference for Wan 2.1 14B text-to-video with high resolution, a comprehensive and open suite of video foundation models that pushes the boundaries of video generation.
A release preview of the olmOCR model from Ai2, fine-tuned from Qwen2-VL-7B-Instruct using the olmOCR-mix-0225 dataset
Generate 5s 480p videos. Wan is an advanced and powerful visual generation model developed by Tongyi Lab of Alibaba Group.
Like Ideogram v2 turbo, but now faster and cheaper
State of the art video generation model. Veo 2 can faithfully follow simple and complex instructions, and convincingly simulates real-world physics as well as a wide range of visual styles.
In-Context LoRA with Image-to-Image and Inpainting to apply your logo to anything
The most intelligent Claude model and the first hybrid reasoning model on the market (claude-3-7-sonnet-20250219)
Generate high-quality videos from text prompts using StepVideo