import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});

// Example model and prompt; any model from the catalog below works the same way.
const model = "black-forest-labs/flux-schnell";
const input = {
  prompt: "An astronaut riding a rainbow unicorn, cinematic, dramatic",
};

const [output] = await replicate.run(model, { input });
console.log(output);
With Replicate you can run the latest image generation models with just a few lines of code, including:
ideogram-ai/ideogram-v2a
Like Ideogram v2, but faster and cheaper
1.2M runs
ideogram-ai/ideogram-v2a-turbo
Like Ideogram v2 turbo, but now faster and cheaper
332.5K runs
minimax/image-01
Minimax's first image model, with character reference support
1.2M runs
bytedance/seedream-3
A text-to-image model with support for native high-resolution (2K) image generation
471.5K runs
luma/photon
High-quality image generation model optimized for creative professional workflows and ultra-high fidelity outputs
1.8M runs
luma/photon-flash
Accelerated variant of Photon prioritizing speed while maintaining quality
139.4K runs
prunaai/hidream-l1-full
This is an optimised version of the hidream-full model using the pruna ai optimisation toolkit!
27K runs
google/imagen-4-fast
Use this fast version of Imagen 4 when speed and cost are more important than quality
326.5K runs
google/imagen-4-ultra
Use this ultra version of Imagen 4 when quality matters more than speed and cost
240.8K runs
nvidia/sana-sprint-1.6b
SANA-Sprint: One-Step Diffusion with Continuous-Time Consistency Distillation
697.9K runs
prunaai/hidream-l1-dev
This is an optimised version of the hidream-l1-dev model using the pruna ai optimisation toolkit!
41.5K runs
prunaai/hidream-l1-fast
This is an optimised version of the hidream-l1 model using the pruna ai optimisation toolkit!
2.1M runs
prunaai/flux.1-dev
This is the fastest Flux Dev endpoint in the world, contact us for more at pruna.ai
14.7M runs
prunaai/sdxl-lightning
This is the fastest sdxl-lightning endpoint in the world on A100, contact us for more at pruna.ai
315 runs
ideogram-ai/ideogram-v3-quality
The highest quality Ideogram v3 model. v3 creates images with stunning realism, creative designs, and consistent styles
558K runs
ideogram-ai/ideogram-v3-turbo
Turbo is the fastest and cheapest Ideogram v3. v3 creates images with stunning realism, creative designs, and consistent styles
797.5K runs
ideogram-ai/ideogram-v3-balanced
Balance speed, quality and cost. Ideogram v3 creates images with stunning realism, creative designs, and consistent styles
160K runs
black-forest-labs/flux-dev-lora
A version of flux-dev, a text to image model, that supports fast fine-tuned lora inference
3.7M runs
bria/image-3.2
A commercial-ready text-to-image model trained entirely on licensed data. With only 4B parameters it provides exceptional aesthetics and text rendering, and is evaluated to be on par with other leading models in the market
2K runs
google/imagen-4
Google's Imagen 4 flagship model
1.7M runs
black-forest-labs/flux-kontext-pro
A state-of-the-art text-based image editing model that delivers high-quality outputs with excellent prompt following and consistent results for transforming images through natural language
15.3M runs
black-forest-labs/flux-kontext-max
A premium text-based image editing model that delivers maximum performance and improved typography generation for transforming images through natural language prompts
3.9M runs
prunaai/wan-2.2-image
This model generates beautiful cinematic 2 megapixel images in 3-4 seconds and is derived from the Wan 2.2 model through optimisation techniques from the pruna package
37.7K runs
google/imagen-3
Google's highest quality text-to-image model, capable of generating images with detail, rich lighting and beauty
1.4M runs
google/imagen-3-fast
A faster and cheaper Imagen 3 model, for when price or speed are more important than final image quality
308.9K runs
ai-forever/kandinsky-2
text2img model trained on LAION HighRes and fine-tuned on internal datasets
6.2M runs
lucataco/ssd-1b
Segmind Stable Diffusion Model (SSD-1B) is a distilled 50% smaller version of SDXL, offering a 60% speedup while maintaining high-quality text-to-image generation capabilities
1M runs
fofr/any-comfyui-workflow
Run any ComfyUI workflow. Guide: https://github.com/replicate/cog-comfyui
6.5M runs
black-forest-labs/flux-dev
A 12 billion parameter rectified flow transformer capable of generating images from text descriptions
24.1M runs
ideogram-ai/ideogram-v2-turbo
A fast image model with state of the art inpainting, prompt comprehension and text rendering.
2.4M runs
datacte/proteus-v0.3
ProteusV0.3: The Anime Update
4.2M runs
ai-forever/kandinsky-2.2
multilingual text2image latent diffusion model
10M runs
fofr/sdxl-emoji
An SDXL fine-tune based on Apple Emojis
10.3M runs
bytedance/sdxl-lightning-4step
SDXL-Lightning by ByteDance: a fast text-to-image model that makes high-quality images in 4 steps
1B runs
ideogram-ai/ideogram-v2
An excellent image model with state of the art inpainting, prompt comprehension and text rendering
1.8M runs
fofr/realvisxl-v3-multi-controlnet-lora
RealVisXl V3 with multi-controlnet, lora loading, img2img, inpainting
1.8M runs
black-forest-labs/flux-pro
State-of-the-art image generation with top of the line prompt following, visual quality, image detail and output diversity.
12.7M runs
tstramer/material-diffusion
Stable diffusion fork for generating tileable outputs using v1.5 model
2.3M runs
fermatresearch/sdxl-controlnet-lora
Last update: now supports img2img. SDXL Canny ControlNet with LoRA support.
935.4K runs
lucataco/realistic-vision-v5.1
Implementation of Realistic Vision v5.1 with VAE
4.2M runs
stability-ai/sdxl
A text-to-image generative AI model that creates beautiful images
81.2M runs
stability-ai/stable-diffusion-3.5-large
A text-to-image model that generates high-resolution images with fine details. It supports various artistic styles and produces diverse outputs from the same prompt, thanks to Query-Key Normalization.
1.6M runs
black-forest-labs/flux-1.1-pro-ultra
FLUX1.1 [pro] in ultra and raw modes. Images are up to 4 megapixels. Use raw mode for realism.
16M runs
black-forest-labs/flux-1.1-pro
Faster, better FLUX Pro. Text-to-image model with excellent image quality, prompt adherence, and output diversity.
48.9M runs
stability-ai/stable-diffusion
A latent text-to-image diffusion model capable of generating photo-realistic images given any text input
110.6M runs
fofr/sticker-maker
Make stickers with AI. Generates graphics with transparent backgrounds.
1.4M runs
playgroundai/playground-v2.5-1024px-aesthetic
Playground v2.5 is the state-of-the-art open-source model in aesthetic quality
2.6M runs
jagilley/controlnet-scribble
Generate detailed images from scribbled drawings
38.3M runs
fofr/sdxl-multi-controlnet-lora
Multi-controlnet, lora loading, img2img, inpainting
212.6K runs
fofr/latent-consistency-model
Super-fast, 0.6s per image. LCM with img2img, large batching and canny controlnet
1.5M runs
lucataco/dreamshaper-xl-turbo
DreamShaper is a general purpose SD model that aims at doing everything well, photos, art, anime, manga. It's designed to match Midjourney and DALL-E.
224.6K runs
lucataco/open-dalle-v1.1
A unique fusion that showcases exceptional prompt adherence and semantic understanding, it seems to be a step above base SDXL and a step closer to DALLE-3 in terms of prompt comprehension
128.8K runs
adirik/realvisxl-v3.0-turbo
Photorealism with RealVisXL V3.0 Turbo based on SDXL
397K runs
datacte/proteus-v0.2
Proteus v0.2 shows subtle yet significant improvements over Version 0.1. It demonstrates enhanced prompt understanding that surpasses MJ6, while also approaching its stylistic capabilities.
10.5M runs
stability-ai/stable-diffusion-3.5-medium
2.5 billion parameter image model with improved MMDiT-X architecture
63.2K runs
black-forest-labs/flux-schnell
The fastest image generation model tailored for local development and personal use
441M runs
stability-ai/stable-diffusion-3.5-large-turbo
A text-to-image model that generates high-resolution images with fine details. It supports various artistic styles and produces diverse outputs from the same prompt, with a focus on fewer inference steps
727.7K runs
recraft-ai/recraft-v3
Recraft V3 (code-named red_panda) is a text-to-image model with the ability to generate long texts, and images in a wide list of styles. As of today, it is SOTA in image generation, proven by the Text-to-Image Benchmark by Artificial Analysis
4.8M runs
recraft-ai/recraft-v3-svg
Recraft V3 SVG (code-named red_panda) is a text-to-image model with the ability to generate high quality SVG images including logotypes, and icons. The model supports a wide list of styles.
203.8K runs
nvidia/sana
A fast image model with wide artistic range and resolutions up to 4096x4096
175.9K runs
All the latest models are on Replicate. They’re not just demos — they all actually work and have production-ready APIs.
AI shouldn’t be locked up inside academic papers and demos. Make it real by pushing it to Replicate.
openai/gpt-5
OpenAI's new model excelling at coding, writing, and reasoning.
6K runs
runwayml/gen4-image-turbo
Gen-4 Image Turbo is cheaper and 2.5x faster than Gen-4 Image. An image model with references: use up to 3 reference images to create the exact image you need. Capture every angle.
3.9K runs
wan-video/wan-2.2-t2v-fast
A very fast and cheap PrunaAI optimized version of Wan 2.2 A14B text-to-video
22.8K runs
bytedance/dreamina-3.1
4MP text-to-image generation with cinematic quality, precise style control, improved text rendering, and commercial design optimization.
10.7K runs
ideogram-ai/ideogram-character
Generate consistent characters from a single reference image. Outputs can be in many styles. You can also use inpainting to add your character to an existing image.
16.5K runs
runwayml/gen4-aleph
A new way to edit, transform and generate video
2.3K runs
openai/gpt-oss-120b
120b open-weight language model from OpenAI
40.8K runs
qwen/qwen-image
An image generation foundation model in the Qwen series that achieves significant advances in complex text rendering.
39.6K runs
minimax/hailuo-02-fast
A low cost and fast version of Hailuo 02. Generate 6s and 10s videos in 512p
4.1K runs
bytedance/omni-human
Turns your audio/video/images into professional-quality animated videos
2.4K runs
black-forest-labs/flux-krea-dev
An opinionated text-to-image model from Black Forest Labs in collaboration with Krea that excels in photorealism. Creates images that avoid the oversaturated "AI look".
51.6K runs
You can get started with any model with just one line of code. But as you do more complex things, you can fine-tune models or deploy your own custom code.
Our community has already published thousands of models that are ready to use in production. You can run these with one line of code.
import replicate

output = replicate.run(
    "black-forest-labs/flux-dev",
    input={
        "aspect_ratio": "1:1",
        "num_outputs": 1,
        "output_format": "jpg",
        "output_quality": 80,
        "prompt": "An astronaut riding a rainbow unicorn, cinematic, dramatic",
    },
)
print(output)
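Depending on the model and client version, the output may be a list of URLs or file-like objects. A minimal sketch for saving the first image to disk, assuming the file-like FileOutput objects returned by recent versions of the Python client:

# Minimal sketch, assuming `output` is a list of file-like objects
# (FileOutput in recent versions of the replicate Python client).
with open("output.jpg", "wb") as f:
    f.write(output[0].read())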
You can improve models with your own data to create new models that are better suited to specific tasks.
Fine-tuned image models like SDXL can generate images of a particular person, object, or style.
Train a model:
training = replicate.trainings.create(
    destination="mattrothenberg/drone-art",
    version="ostris/flux-dev-lora-trainer:e440909d3512c31646ee2e0c7d6f6f4923224863a6a10c494606e79fb5844497",
    input={
        "steps": 1000,
        # Placeholder URL; point this at a zip archive of your training images
        "input_images": "https://example.com/training-images.zip",
        "trigger_word": "TOK",
    },
)
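Training runs asynchronously, so the call returns immediately. A minimal sketch for polling until it finishes, assuming Replicate's standard training statuses:

import time

# Poll until the training reaches a terminal state.
# Statuses follow Replicate's API: starting, processing, succeeded, failed, canceled.
while training.status not in {"succeeded", "failed", "canceled"}:
    time.sleep(15)
    training = replicate.trainings.get(training.id)

print(training.status)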
This will result in a new model:
mattrothenberg/drone-art
Fantastical images of drones on land and in the sky
0 runs
Then, you can run it with one line of code:
output = replicate.run(
    "mattrothenberg/drone-art:abcde1234...",
    input={"prompt": "a photo of TOK forming a rainbow in the sky"},
)
You aren’t limited to the models on Replicate: you can deploy your own custom models using Cog, our open-source tool for packaging machine learning models.
Cog takes care of generating an API server and deploying it on a big cluster in the cloud. We scale up and down to handle demand, and you only pay for the compute that you use.
First, define the environment your model runs in with cog.yaml:
build:
  gpu: true
  system_packages:
    - "libgl1-mesa-glx"
    - "libglib2.0-0"
  python_version: "3.10"
  python_packages:
    - "torch==1.13.1"
predict: "predict.py:Predictor"
Next, define how predictions are run on your model with predict.py:
from cog import BasePredictor, Input, Path
import torch

class Predictor(BasePredictor):
    def setup(self):
        """Load the model into memory to make running multiple predictions efficient"""
        self.model = torch.load("./weights.pth")

    # The arguments and types the model takes as input
    def predict(
        self,
        image: Path = Input(description="Grayscale input image"),
    ) -> Path:
        """Run a single prediction on the model"""
        processed_image = preprocess(image)
        output = self.model(processed_image)
        return postprocess(output)
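Once the model is pushed to Replicate with the Cog CLI, it can be run through the same API as any other model. A sketch, using a hypothetical your-username/your-model name:

import replicate

# Hypothetical model name for illustration; replace with your own.
output = replicate.run(
    "your-username/your-model",
    input={"image": open("input.jpg", "rb")},
)
print(output)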
Thousands of businesses are building their AI products on Replicate. Your team can deploy an AI feature in a day and scale to millions of users, without having to be machine learning experts.
Learn more about our enterprise plans.
If you get a ton of traffic, Replicate scales up automatically to handle the demand. If you don't get any traffic, we scale down to zero and don't charge you a thing.
Replicate only bills you for how long your code is running. You don't pay for expensive GPUs when you're not using them.
Deploying machine learning models at scale is hard. If you've tried, you know. API servers, weird dependencies, enormous model weights, CUDA, GPUs, batching.
[Chart: prediction throughput (requests per second)]
Metrics let you keep an eye on how your models are performing, and logs let you zoom in on particular predictions to debug how your model is behaving.
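You can also inspect predictions programmatically. A minimal sketch using the Python client's paginated predictions API:

import replicate

# List recent predictions and print the logs of the most recent one.
page = replicate.predictions.list()
if page.results:
    latest = page.results[0]
    print(latest.id, latest.status)
    print(latest.logs)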
With Replicate and tools like Next.js and Vercel, you can wake up with an idea and watch it hit the front page of Hacker News by the time you go to bed.