Machine learning doesn’t need to be so hard.

Run models in the cloud at scale.

01
Run

Replicate lets you run machine learning models with a few lines of code, without needing to understand how machine learning works.

Use our Python library:

import replicate
model = replicate.models.get("stability-ai/stable-diffusion")
version = model.versions.get("db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf")
version.predict(prompt="an astronaut riding on a horse")
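
When the prediction finishes, you get the model's output back. Continuing the snippet above, a minimal sketch of saving the result, assuming this model's output is a list of image URLs (the output.png filename is just illustrative):

import urllib.request

output = version.predict(prompt="an astronaut riding on a horse")
# Stable Diffusion's output is a list of image URLs; save the first one.
urllib.request.urlretrieve(output[0], "output.png")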

...or query the API directly with your tool of choice:

$ curl -s -X POST \
    -d '{"version": "db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf", "input": {"prompt": "an astronaut riding on a horse"}}' \
    -H "Authorization: Token $REPLICATE_API_TOKEN" \
    -H 'Content-Type: application/json' \
    https://api.replicate.com/v1/predictions
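
The API is asynchronous: the POST returns immediately with a prediction ID and status, and you fetch the output once it has finished. A minimal sketch of that flow in Python with the requests library (the one-second polling loop is illustrative, not part of the API):

import os
import time
import requests

headers = {"Authorization": f"Token {os.environ['REPLICATE_API_TOKEN']}"}

# Create the prediction; the response includes its id and current status.
prediction = requests.post(
    "https://api.replicate.com/v1/predictions",
    json={
        "version": "db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf",
        "input": {"prompt": "an astronaut riding on a horse"},
    },
    headers=headers,
).json()

# Poll until it finishes, then read the output.
while prediction["status"] not in ("succeeded", "failed", "canceled"):
    time.sleep(1)
    prediction = requests.get(
        f"https://api.replicate.com/v1/predictions/{prediction['id']}",
        headers=headers,
    ).json()

print(prediction["output"])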

Thousands of models, ready to use

Machine learning can do some extraordinary things. Replicate's community of machine learning hackers has shared thousands of models that you can run.

Image to text

Models that generate text descriptions from images

Text to image

Image and video generation models trained with diffusion processes

Explore models, or learn more about our API

02
Push

You're building new products with machine learning. You don't have time to fight Python dependency hell, get mired in GPU configuration, or cobble together a Dockerfile.

That's why we built Cog, an open-source tool that lets you package machine learning models in a standard, production-ready container.

First, define the environment your model runs in with cog.yaml:

build:
  gpu: true
  system_packages:
    - "libgl1-mesa-glx"
    - "libglib2.0-0"
  python_version: "3.10"
  python_packages:
    - "torch==1.13.1"
predict: "predict.py:Predictor"

Next, define how predictions are run on your model with predict.py:

from cog import BasePredictor, Input, Path
import torch

class Predictor(BasePredictor):
    def setup(self):
        """Load the model into memory to make running multiple predictions efficient"""
        self.model = torch.load("./weights.pth")

    # The arguments and types the model takes as input
    def predict(self,
          image: Path = Input(description="Grayscale input image")
    ) -> Path:
        """Run a single prediction on the model"""
        processed_image = preprocess(image)
        output = self.model(processed_image)
        return postprocess(output)
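
The preprocess and postprocess helpers aren't part of Cog; they're ordinary functions you write in the same predict.py. A minimal sketch for this colorization example, assuming a model that maps grayscale tensors to color tensors (you'd also add torchvision and Pillow to python_packages in cog.yaml):

from PIL import Image
from torchvision import transforms
from torchvision.utils import save_image

def preprocess(image_path):
    # Load the grayscale input and turn it into a batched [1, 1, H, W] tensor.
    image = Image.open(image_path).convert("L")
    return transforms.ToTensor()(image).unsqueeze(0)

def postprocess(output):
    # Write the output tensor to a file and return its path.
    # Path here is cog's Path, imported at the top of predict.py.
    out_path = Path("/tmp/output.png")
    save_image(output.squeeze(0), str(out_path))
    return out_path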

Now, you can run predictions on this model locally:

$ cog predict -i @input.jpg
--> Building Docker image...
--> Running Prediction...
--> Output written to output.jpg

Or, build a Docker image for deployment:

$ cog build -t my-colorization-model
--> Building Docker image...
--> Built my-colorization-model:latest
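
The built image serves Cog's HTTP prediction API, so you can test it like any other container. A sketch of calling it from Python, assuming the image is running locally on port 5000 and using Cog's convention of passing file inputs as base64 data URLs:

import base64
import requests

# Encode the input image as a data URL, which Cog accepts for file inputs.
with open("input.jpg", "rb") as f:
    data_url = "data:image/jpeg;base64," + base64.b64encode(f.read()).decode()

# Assumes the container was started with: docker run -p 5000:5000 my-colorization-model
resp = requests.post(
    "http://localhost:5000/predictions",
    json={"input": {"image": data_url}},
)
print(resp.json()["status"])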

Finally, push your model to Replicate, and you can run it in the cloud with a few lines of code:

$ cog push
Pushed model to replicate.com/your-username/my-colorization-model

import replicate
model = replicate.models.get("your-username/my-colorization-model")
version = model.versions.get("your-model-version-id")
version.predict(image=open("input.jpg", "rb"))

Push a model, or learn more about Cog

03
Scale

Deploying machine learning models at scale is horrible. If you've tried, you know. API servers, weird dependencies, enormous model weights, CUDA, GPUs, batching. If you're building a product fast, you don't want to be dealing with this stuff.

Replicate makes it easy to deploy machine learning models. You can use open-source models off the shelf, or you can deploy your own custom, private models at scale.

  • Automatic API

    Define your model with Cog, and we'll automatically generate a scalable, production-ready API server for it and deploy it on a big cluster of GPUs.

  • Automatic scale

    If you get a ton of traffic, Replicate scales up automatically to handle the demand. If you don't get any traffic, we scale down to zero and don't charge you a thing.

  • Pay by the second

    Replicate only bills you for how long your code is running. You don't pay for expensive GPUs when you're not using them.

Get started, or learn more about us