Push a model to Replicate
Learn how to package your own trained model using Cog and push it to Replicate. By the end of this guide your model will have an interactive GUI and its own HTTP API. You'll also have the option to publicly share your model so anyone can try it.

Prerequisites
- A trained model in a directory on your computer: your model's saved weights, alongside any code needed to run it. If you don't already have your own trained model, you can use one from replicate/cog-examples.
- Docker. You'll be using the Cog command-line tool to build and push your model. Cog uses Docker to create a container for your model. You'll need to install and start Docker before you can run Cog. You can confirm Docker is running by typing docker info in your terminal.
- If your model needs a GPU, a Linux machine with an NVIDIA GPU attached and the NVIDIA Container Toolkit installed. If you don't already have access to a machine with a GPU, check out our guide to getting a GPU machine. If you just need a CPU for inference, you can also use macOS.
- An account on Replicate.
Create a model page on Replicate
Next you'll create a page for your model on Replicate. Visit replicate.com/create to choose a name for your model, and specify whether it should be public or private.
Install Cog
Cog is an open source tool that makes it easy to put a machine learning model in a Docker container. Run the following command to install it:
sudo curl -o /usr/local/bin/cog -L https://github.com/replicate/cog/releases/latest/download/cog_`uname -s`_`uname -m`
sudo chmod +x /usr/local/bin/cog
Refer to GitHub for more information about Cog and its full documentation.
Initialize Cog
To configure your project for use with Cog, you'll need to add two files to the directory containing your model:
- cog.yaml defines system requirements, Python package dependencies, etc.
- predict.py describes the prediction interface for your model
Use the cog init command to generate these files in your project:
cd path/to/your/model
cog init
Define your dependencies
The cog.yaml file defines all of the different things that need to be installed for your model to run. You can think of it as a simple way of defining a Docker image.
For example:
build:
  python_version: "3.8"
  python_packages:
    - "torch==1.7.0"
This will generate a Docker image with Python 3.8 and PyTorch 1.7 installed, following various other sensible best practices.
Using GPUs
To use GPUs, add the gpu: true option to the build section of your cog.yaml:
build:
  gpu: true
  # ...
Cog will use the nvidia-docker base image and automatically figure out what versions of CUDA and cuDNN to use based on the versions of Python, PyTorch, and TensorFlow that you're using.
Running commands
To run a command inside this environment, prefix it with cog run:
$ cog run python
✓ Building Docker image from cog.yaml... Successfully built 8f54020c8981
Running 'python' in Docker with the current directory mounted as a volume...
────────────────────────────────────────────────────────────────────────────────────────
Python 3.8.10 (default, May 12 2021, 23:32:14)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
This is handy for ensuring a consistent environment for development or training.
With cog.yaml, you can also install system packages and other things. Take a look at the full reference to explore what else you can do.
Define how to run predictions
The next step is to update predict.py to define the interface for running predictions on your model. The predict.py generated by cog init looks something like this:
from cog import BasePredictor, Path, Input
import torch

class Predictor(BasePredictor):
    def setup(self):
        """Load the model into memory to make running multiple predictions efficient"""
        self.net = torch.load("weights.pth")

    def predict(self,
        image: Path = Input(description="Image to enlarge"),
        scale: float = Input(description="Factor to scale image by", default=1.5)
    ) -> Path:
        """Run a single prediction on the model"""
        # ... pre-processing ...
        output = self.net(input)
        # ... post-processing ...
        return output
Edit your predict.py file and fill in the functions with your own model's setup and prediction code. You might need to import parts of your model from another file.
You should keep your model weights in the same directory as your predict.py file, or a subdirectory underneath it, and load them directly off disk in your setup() function, as shown in the example above. This makes the weights more efficient to load and easier to version, because they get copied into the Docker image that Cog produces.
You also need to define the inputs to your model as arguments to the predict() function, as demonstrated above. For each argument, you need to annotate it with a type. The supported types are:
- str: a string
- int: an integer
- float: a floating point number
- bool: a boolean
- cog.File: a file-like object representing a file
- cog.Path: a path to a file on disk
You can provide more information about the input with the Input() function, as shown above. It takes these basic arguments (see the sketch after this list):
- description: A description of what to pass to this input for users of the model
- default: A default value to set the input to. If this argument isn't passed, then the input is required. If it's explicitly set to None, the input is optional.
- ge: For int or float types, the value should be greater than or equal to this number.
- le: For int or float types, the value should be less than or equal to this number.
- choices: For str or int types, a list of possible values for this input.
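For example, here's a sketch of a predict() signature that uses these options. The mode and seed inputs, and the bounds on scale, are hypothetical additions used only to illustrate ge, le, choices, and optional inputs:

from cog import BasePredictor, Input, Path

class Predictor(BasePredictor):
    def predict(
        self,
        image: Path = Input(description="Image to enlarge"),
        # ge/le constrain the allowed range of a numeric input
        scale: float = Input(description="Factor to scale image by", default=1.5, ge=1.0, le=10.0),
        # choices restricts the input to a fixed set of values (hypothetical input)
        mode: str = Input(description="Upscaling mode", default="fast", choices=["fast", "best"]),
        # default=None makes an input optional (hypothetical input)
        seed: int = Input(description="Random seed", default=None),
    ) -> Path:
        ...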
There are some more advanced options you can pass, too. For more details, refer to the prediction interface documentation.
Next, add the line predict: "predict.py:Predictor" to your cog.yaml, so it looks something like this:
build:
  python_version: "3.8"
  python_packages:
    - "torch==1.7.0"
predict: "predict.py:Predictor"
That's it!
Test your model locally
To test that this works, try running a prediction on the model:
$ cog predict -i image=@input.jpg
✓ Building Docker image from cog.yaml... Successfully built 664ef88bc1f4
✓ Model running in Docker image 664ef88bc1f4
Written output to output.png
To pass more inputs to the model, you can add more -i options:
$ cog predict -i image=@image.jpg -i scale=2.0
In this case it's just a number, not a file, so you don't need the @ prefix.
Push your model
Now that you've configured your model for use with Cog and you have a corresponding model page on Replicate, it's time to publish it to Replicate's registry:
cog login
cog push r8.im/<your-username>/<your-model-name>
Your username and model name must match the values you set on Replicate.
Note: You can also set the image property in your cog.yaml file. This allows you to run cog push without specifying the image, and also makes your model page on Replicate more discoverable for folks reading your model's source code.
Run predictions
Once you've pushed your model to Replicate, it will be visible on the website, and you can use the web-based form to run predictions using your model.
To run predictions in the cloud from your code, you can use the Python client library.
Install it from pip:
pip install replicate
Authenticate by setting your token in an environment variable:
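export REPLICATE_API_TOKEN=<paste-your-token-here>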
Then, you can use the model from your Python code:
import replicate
replicate.run(
    "replicate/hello-world:5c7d5dc6dd8bf75c1acaa8565735e7986bc5b66206b55cca93cb72c9bf15ccaa",
    input={"text": "python"}
)
# "hello python"
To pass a file as an input, use a file handle or URL:
image = open("mystery.jpg", "rb")
# or...
image = "https://example.com/mystery.jpg"
replicate.run(
    "replicate/resnet:dd782a3d531b61af491d1026434392e8afb40bfb53b8af35f727e80661489767",
    input={"image": image}
)
URLs are more efficient if your file is already in the cloud somewhere, or if it's a large file.
If your model returns a file, it will be represented as a URL in the output. To securely fetch the file, you'll need to pass an Authorization: Token <paste-your-token-here> header, as documented in the HTTP API reference. (We're working on a better Python API for fetching files.)
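For example, here's a minimal sketch using the requests library, assuming your API token is set in the REPLICATE_API_TOKEN environment variable (the output URL and filename below are placeholders):

import os
import requests

# Placeholder: a file URL returned in your model's output
output_url = "<url-from-model-output>"

response = requests.get(
    output_url,
    headers={"Authorization": f"Token {os.environ['REPLICATE_API_TOKEN']}"},
)
response.raise_for_status()

# Placeholder filename; use whatever extension your model produces
with open("output.png", "wb") as f:
    f.write(response.content)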
For more details, head to the full documentation on GitHub.
You can also run your model with the raw HTTP API. Refer to the HTTP API reference for more details.