Transformers is an open-source Python library that provides a consistent interface for using language models. The library contains multiple open-source generative language models like FLAN, GPT-J, GPT Neo, LLaMA, BLOOM, and others, which have been pre-trained on large text corpora and can be fine-tuned for specific tasks with relatively small amounts of training data.
Transformers also contains models like Longformer, BERT, and RoBERTa, which are generally used for more traditional natural language processing tasks like classification, named entity recognition, and so on. The process we're walking through here works for both kinds of models; in fact, it should work for any model in the Transformers library.
In this guide we'll walk you through the process of taking an existing Transformers model and pushing it to Replicate as your own public or private model with a stable API.
To follow this guide, you'll need:

- A Replicate account
- Docker, installed and running on your machine. You can confirm Docker is running by typing docker info in your terminal.

First, create a model on Replicate at replicate.com/create. If you haven't used Replicate before, you'll need to sign in with your GitHub account. You can configure the model to be private, so that only you can use it, or public, so that anyone can use it.
Cog is an open source tool that makes it easy to put a machine learning model in a Docker container. Run the following commands to install it and set the correct permissions:
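On macOS or Linux, the install looks like this (run with sudo, or adjust the install path to taste):

```shell
# Download the latest Cog release for your platform and make it executable
sudo curl -o /usr/local/bin/cog -L "https://github.com/replicate/cog/releases/latest/download/cog_$(uname -s)_$(uname -m)"
sudo chmod +x /usr/local/bin/cog
```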
Confirm that Cog is installed by running cog --version:
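If the install succeeded, this prints the installed version:

```shell
cog --version
```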
Create a new directory and initialize a new Cog project:
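For example, if you're packaging a Flan-T5 model (the directory name here is just an example):

```shell
mkdir flan-t5
cd flan-t5
cog init
```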
This will create two files, cog.yaml and predict.py, which you'll use to configure your dependencies and define the inputs and outputs of your model.
The cog.yaml file defines the CUDA and Python versions and the dependencies for the model. This file tells Cog how to package the model.
Replace the contents of the cog.yaml file with the following:
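Here's a sketch of a configuration for a Flan-T5 model. The package list is illustrative (sentencepiece is needed by the T5 tokenizer); in practice you should pin exact versions that match your model:

```yaml
build:
  gpu: true
  python_version: "3.10"
  python_packages:
    - torch
    - transformers
    - sentencepiece
    - accelerate
predict: "predict.py:Predictor"
```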
The predict.py file defines the inputs and outputs of the model, and the code to run it. The language model itself is imported through the Python transformers library.
Replace the contents of the predict.py file with the following:
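A minimal sketch of a predictor for Flan-T5 follows; the checkpoint name, cache directory, and generation settings are examples, so adjust them for your own model:

```python
from cog import BasePredictor, Input
from transformers import AutoTokenizer, T5ForConditionalGeneration

CACHE_DIR = "weights"             # where the download script stores the files
MODEL_NAME = "google/flan-t5-xl"  # example checkpoint


class Predictor(BasePredictor):
    def setup(self):
        # Load the tokenizer and weights once, at startup
        self.tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, cache_dir=CACHE_DIR)
        self.model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME, cache_dir=CACHE_DIR)

    def predict(
        self,
        prompt: str = Input(description="Text prompt to send to the model"),
    ) -> str:
        # Tokenize the prompt, generate a completion, and decode it back to text
        input_ids = self.tokenizer(prompt, return_tensors="pt").input_ids
        outputs = self.model.generate(input_ids, max_length=100)
        return self.tokenizer.decode(outputs[0], skip_special_tokens=True)
```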
The AutoTokenizer used above should work for all Transformers models.
If you want to use a Transformers model other than Flan-T5, you'll need to specify the model class to use. For example, if you're using a GPT-J model, you'll want to use AutoModelForCausalLM instead of T5ForConditionalGeneration. See the Transformers docs for more details.
Next, you'll create a script that uses the transformers library to download the pretrained weights.
Create a file for the script:
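One way to set that up (the script path here matches the one used in the next step):

```shell
mkdir -p script
touch script/download_weights
chmod +x script/download_weights
```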
Paste the following code into the script/download_weights file:
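A sketch of the script, assuming the same example checkpoint and cache directory used in predict.py:

```python
#!/usr/bin/env python
# Downloads the tokenizer and model weights into a local cache directory,
# so they can be baked into the Docker image instead of fetched at runtime.
from transformers import AutoTokenizer, T5ForConditionalGeneration

CACHE_DIR = "weights"
MODEL_NAME = "google/flan-t5-xl"  # example checkpoint; use your own

AutoTokenizer.from_pretrained(MODEL_NAME, cache_dir=CACHE_DIR)
T5ForConditionalGeneration.from_pretrained(MODEL_NAME, cache_dir=CACHE_DIR)
```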
Run the script to download the weights:
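Running it with cog run executes the script inside the Docker environment defined by cog.yaml, so the transformers dependency is available:

```shell
cog run script/download_weights
```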
This process will take a while, but you only need to run it once, because the downloaded weights are cached on disk. Get up and stretch, grab yourself a snack, or use this opportunity to add metadata to the model page you created on Replicate in Step 1: a title, a README, a GitHub repository URL, and so on.
Now that you've downloaded the weights, you can run the model locally with Cog:
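For example, assuming your predictor takes an input named prompt (the prompt text here is just an example):

```shell
cog predict -i prompt="Q: Answer the following yes/no question. Can a dog drive a car?"
```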
This will run the model locally and return output text.
Now that you've created your model, it's time to push it to Replicate.
First you'll need to authenticate:
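Run cog login, which will prompt you for your Replicate API token:

```shell
cog login
```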
Then push your model using the name you specified in Step 1:
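The destination combines Replicate's registry, r8.im, with your username and the model name you chose (the placeholders below are yours to fill in):

```shell
cog push r8.im/<your-username>/<your-model-name>
```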
Your model is now live! 🚀
You can run the model from the website by clicking the "Demo" tab on the model page, or you can use the HTTP API to run the model from your own code.
Click the "API" tab on your model page to see example code for running the model:
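For instance, with Replicate's Python client it looks roughly like this; the model name and input below are placeholders, so copy the real snippet from your model's API tab:

```python
# Assumes `pip install replicate` and REPLICATE_API_TOKEN set in your environment.
import replicate

output = replicate.run(
    "your-username/your-model",  # placeholder: use your model's name (and version)
    input={"prompt": "Q: Can a dog drive a car?"},
)
print(output)
```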
Now that you have your own model, see what else you can do with it!
To see what models you can use, check out the Transformers docs on Hugging Face.
If you need inspiration or guidance, jump into our Discord.