
HTTP API


Authentication

All API requests must include a valid API token in the Authorization request header. The token must be prefixed by "Bearer", followed by a space and the token value.

Example: Authorization: Bearer r8_Hw***********************************

Find your tokens at https://replicate.com/account/api-tokens
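
For example, in Python you can attach the same header to every request. This is a minimal sketch using the third-party requests library; it assumes your token is stored in the REPLICATE_API_TOKEN environment variable:

import os
import requests

token = os.environ["REPLICATE_API_TOKEN"]

response = requests.get(
    "https://api.replicate.com/v1/account",
    headers={"Authorization": f"Bearer {token}"},
)
response.raise_for_status()
print(response.json())  # e.g. {"type": "organization", "username": "acme", ...}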

Get the authenticated account

Endpoint

GET https://api.replicate.com/v1/account

Description

Returns information about the user or organization associated with the provided API token.

Example cURL request:

curl -s \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  https://api.replicate.com/v1/account

The response will be a JSON object describing the account:

{
  "type": "organization",
  "username": "acme",
  "name": "Acme Corp, Inc.",
  "github_url": "https://github.com/acme",
}

List collections of models

Endpoint

GET https://api.replicate.com/v1/collections

Description

Get a paginated list of collections of models.

Example cURL request:

curl -s \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  https://api.replicate.com/v1/collections

The response will be a paginated JSON list of collection objects:

{
  "next": "null",
  "previous": null,
  "results": [
    {
      "name": "Super resolution",
      "slug": "super-resolution",
      "description": "Upscaling models that create high-quality images from low-quality images."
    }
  ]
}
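
List endpoints like this one are paginated: when next is not null, request that URL to fetch the following page. Here is a minimal sketch in Python using the requests library (the helper name is illustrative):

import os
import requests

headers = {"Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}"}

def list_all_collections():
    # Follow the `next` cursor until every page has been fetched.
    url = "https://api.replicate.com/v1/collections"
    while url:
        page = requests.get(url, headers=headers)
        page.raise_for_status()
        data = page.json()
        yield from data["results"]
        url = data["next"]  # None once there are no more pages

for collection in list_all_collections():
    print(collection["slug"])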

Get a collection of models

Endpoint

GET https://api.replicate.com/v1/collections/{collection_slug}

Description

Get a single collection of models by its slug, including the models in that collection.

Example cURL request:

curl -s \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  https://api.replicate.com/v1/collections/super-resolution

The response will be a collection object with a nested list of the models in that collection:

{
  "name": "Super resolution",
  "slug": "super-resolution",
  "description": "Upscaling models that create high-quality images from low-quality images.",
  "models": [...]
}

URL Parameters

  • collection_slug (string, required)
    The slug of the collection, e.g. super-resolution.

List deployments

Endpoint

GET https://api.replicate.com/v1/deployments

Description

Get a list of deployments associated with the current account, including the latest release configuration for each deployment.

Example cURL request:

curl -s \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  https://api.replicate.com/v1/deployments

The response will be a paginated JSON array of deployment objects, sorted with the most recent deployment first:

{
  "next": "http://api.replicate.com/v1/deployments?cursor=cD0yMDIzLTA2LTA2KzIzJTNBNDAlM0EwOC45NjMwMDAlMkIwMCUzQTAw",
  "previous": null,
  "results": [
    {
      "owner": "replicate",
      "name": "my-app-image-generator",
      "current_release": {
        "number": 1,
        "model": "stability-ai/sdxl",
        "version": "da77bc59ee60423279fd632efb4795ab731d9e3ca9705ef3341091fb989b7eaf",
        "created_at": "2024-02-15T16:32:57.018467Z",
        "created_by": {
          "type": "organization",
          "username": "acme",
          "name": "Acme Corp, Inc.",
          "github_url": "https://github.com/acme",
        },
        "configuration": {
          "hardware": "gpu-t4",
          "min_instances": 1,
          "max_instances": 5
        }
      }
    }
  ]
}

Create a deployment

Endpoint

POST https://api.replicate.com/v1/deployments

Description

Create a new deployment.

Example cURL request:

curl -s \
  -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "name": "my-app-image-generator",
        "model": "stability-ai/sdxl",
        "version": "da77bc59ee60423279fd632efb4795ab731d9e3ca9705ef3341091fb989b7eaf",
        "hardware": "gpu-t4",
        "min_instances": 0,
        "max_instances": 3
      }' \
  https://api.replicate.com/v1/deployments

The response will be a JSON object describing the deployment:

{
  "owner": "acme",
  "name": "my-app-image-generator",
  "current_release": {
    "number": 1,
    "model": "stability-ai/sdxl",
    "version": "da77bc59ee60423279fd632efb4795ab731d9e3ca9705ef3341091fb989b7eaf",
    "created_at": "2024-02-15T16:32:57.018467Z",
    "created_by": {
      "type": "organization",
      "username": "acme",
      "name": "Acme Corp, Inc.",
      "github_url": "https://github.com/acme",
    },
    "configuration": {
      "hardware": "gpu-t4",
      "min_instances": 1,
      "max_instances": 5
    }
  }
}

Request Body

  • hardware (string, required)
    The SKU for the hardware used to run the model. Possible values can be retrieved from the hardware.list endpoint.
  • max_instances (integer, required)
    The maximum number of instances for scaling.
  • min_instances (integer, required)
    The minimum number of instances for scaling.
  • model (string, required)
    The full name of the model that you want to deploy e.g. stability-ai/sdxl.
  • name (string, required)
    The name of the deployment.
  • version (string, required)
    The 64-character string ID of the model version that you want to deploy.

Delete a deployment

Endpoint

DELETE https://api.replicate.com/v1/deployments/{deployment_owner}/{deployment_name}

Description

Delete a deployment

Deployment deletion has some restrictions:

  • You can only delete deployments that have been offline and unused for at least 15 minutes.

Example cURL request:

curl -s -X DELETE \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  https://api.replicate.com/v1/deployments/acme/my-app-image-generator

The response will be an empty 204, indicating the deployment has been deleted.

URL Parameters

  • deployment_owner (string, required)
    The name of the user or organization that owns the deployment.
  • deployment_name (string, required)
    The name of the deployment.

Get a deployment

Endpoint

GET https://api.replicate.com/v1/deployments/{deployment_owner}/{deployment_name}

Description

Get information about a deployment by name including the current release.

Example cURL request:

curl -s \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  https://api.replicate.com/v1/deployments/replicate/my-app-image-generator

The response will be a JSON object describing the deployment:

{
  "owner": "acme",
  "name": "my-app-image-generator",
  "current_release": {
    "number": 1,
    "model": "stability-ai/sdxl",
    "version": "da77bc59ee60423279fd632efb4795ab731d9e3ca9705ef3341091fb989b7eaf",
    "created_at": "2024-02-15T16:32:57.018467Z",
    "created_by": {
      "type": "organization",
      "username": "acme",
      "name": "Acme Corp, Inc.",
      "github_url": "https://github.com/acme",
    },
    "configuration": {
      "hardware": "gpu-t4",
      "min_instances": 1,
      "max_instances": 5
    }
  }
}

URL Parameters

  • deployment_owner (string, required)
    The name of the user or organization that owns the deployment.
  • deployment_name (string, required)
    The name of the deployment.

Update a deployment

Endpoint

PATCH https://api.replicate.com/v1/deployments/{deployment_owner}/{deployment_name}

Description

Update properties of an existing deployment, including hardware, min/max instances, and the deployment's underlying model version.

Example cURL request:

curl -s \
  -X PATCH \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"min_instances": 3, "max_instances": 10}' \
  https://api.replicate.com/v1/deployments/acme/my-app-image-generator

The response will be a JSON object describing the deployment:

{
  "owner": "acme",
  "name": "my-app-image-generator",
  "current_release": {
    "number": 2,
    "model": "stability-ai/sdxl",
    "version": "da77bc59ee60423279fd632efb4795ab731d9e3ca9705ef3341091fb989b7eaf",
    "created_at": "2024-02-15T16:32:57.018467Z",
    "created_by": {
      "type": "organization",
      "username": "acme",
      "name": "Acme Corp, Inc.",
      "github_url": "https://github.com/acme",
    },
    "configuration": {
      "hardware": "gpu-t4",
      "min_instances": 3,
      "max_instances": 10
    }
  }
}

Updating any deployment properties will increment the number field of the current_release.

URL Parameters

  • deployment_owner (string, required)
    The name of the user or organization that owns the deployment.
  • deployment_name (string, required)
    The name of the deployment.

Request Body

  • hardware (string)
    The SKU for the hardware used to run the model. Possible values can be retrieved from the hardware.list endpoint.
  • max_instances (integer)
    The maximum number of instances for scaling.
  • min_instances (integer)
    The minimum number of instances for scaling.
  • version (string)
    The ID of the model version that you want to deploy.

Create a prediction using a deployment

Endpoint

POST https://api.replicate.com/v1/deployments/{deployment_owner}/{deployment_name}/predictions

Description

Create a prediction for the deployment and inputs you provide.

Example cURL request:

curl -s -X POST -H 'Prefer: wait' \
  -d '{"input": {"prompt": "A photo of a bear riding a bicycle over the moon"}}' \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H 'Content-Type: application/json' \
  https://api.replicate.com/v1/deployments/acme/my-app-image-generator/predictions

The request will wait up to 60 seconds for the model to run. If this time is exceeded, the prediction will be returned in a "starting" state and will need to be retrieved using the predictions.get endpoint.

For a complete overview of the deployments.predictions.create API, check out our documentation on creating a prediction, which covers a variety of use cases.

URL Parameters

  • deployment_owner (string, required)
    The name of the user or organization that owns the deployment.
  • deployment_name (string, required)
    The name of the deployment.

Headers

  • Prefer (string)
    Set this header to wait to hold the request open until the prediction completes, for up to 60 seconds.

Request Body

  • input (object, required)

    The model's input as a JSON object. The input schema depends on what model you are running. To see the available inputs, click the "API" tab on the model you are running or get the model version and look at its openapi_schema property. For example, stability-ai/sdxl takes prompt as an input.

    Files should be passed as HTTP URLs or data URLs.

    Use an HTTP URL when:

    • you have a large file > 256kb
    • you want to be able to use the file multiple times
    • you want your prediction metadata to be associable with your input files

    Use a data URL when:

    • you have a small file <= 256kb
    • you don't want to upload and host the file somewhere
    • you don't need to use the file again (Replicate will not store it)
  • stream (boolean)

    This field is deprecated.

    Request a URL to receive streaming output using server-sent events (SSE).

    This field is no longer needed, as the returned prediction will always have a stream entry in its urls property if the model supports streaming.

  • webhook (string)

    An HTTPS URL for receiving a webhook when the prediction has new output. The webhook will be a POST request where the request body is the same as the response body of the get prediction operation. If there are network problems, we will retry the webhook a few times, so make sure it can be safely called more than once. Replicate will not follow redirects when sending webhook requests to your service, so be sure to specify a URL that will resolve without redirecting.

  • webhook_events_filter (array)

    By default, we will send requests to your webhook URL whenever there are new outputs or the prediction has finished. You can change which events trigger webhook requests by specifying webhook_events_filter in the prediction request:

    • start: immediately on prediction start
    • output: each time a prediction generates an output (note that predictions can generate multiple outputs)
    • logs: each time log output is generated by a prediction
    • completed: when the prediction reaches a terminal state (succeeded/canceled/failed)

    For example, if you only wanted requests to be sent at the start and end of the prediction, you would provide:

    {
      "input": {
        "text": "Alice"
      },
      "webhook": "https://example.com/my-webhook",
      "webhook_events_filter": ["start", "completed"]
    }

    Requests for event types output and logs will be sent at most once every 500ms. If you request start and completed webhooks, then they'll always be sent regardless of throttling.

List available hardware for models

Endpoint

GET https://api.replicate.com/v1/hardware

Description

Get a list of the hardware SKUs available for running models.

Example cURL request:

curl -s \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  https://api.replicate.com/v1/hardware

The response will be a JSON array of hardware objects:

[
    {"name": "CPU", "sku": "cpu"},
    {"name": "Nvidia T4 GPU", "sku": "gpu-t4"},
    {"name": "Nvidia A40 GPU", "sku": "gpu-a40-small"},
    {"name": "Nvidia A40 (Large) GPU", "sku": "gpu-a40-large"},
]

List public models

Endpoint

GET https://api.replicate.com/v1/models

Description

Get a paginated list of public models.

Example cURL request:

curl -s \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  https://api.replicate.com/v1/models

The response will be a paginated JSON array of model objects:

{
  "next": null,
  "previous": null,
  "results": [
    {
      "url": "https://replicate.com/acme/hello-world",
      "owner": "acme",
      "name": "hello-world",
      "description": "A tiny model that says hello",
      "visibility": "public",
      "github_url": "https://github.com/replicate/cog-examples",
      "paper_url": null,
      "license_url": null,
      "run_count": 5681081,
      "cover_image_url": "...",
      "default_example": {...},
      "latest_version": {...}
    }
  ]
}

The cover_image_url string is an HTTPS URL for an image file. This can be:

  • An image uploaded by the model author.
  • The output file of the example prediction, if the model author has not set a cover image.
  • The input file of the example prediction, if the model author has not set a cover image and the example prediction has no output file.
  • A generic fallback image.

Create a model

Endpoint

POST https://api.replicate.com/v1/models

Description

Create a model.

Example cURL request:

curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H 'Content-Type: application/json' \
  -d '{"owner": "alice", "name": "my-model", "description": "An example model", "visibility": "public", "hardware": "cpu"}' \
  https://api.replicate.com/v1/models

The response will be a model object in the following format:

{
  "url": "https://replicate.com/alice/my-model",
  "owner": "alice",
  "name": "my-model",
  "description": "An example model",
  "visibility": "public",
  "github_url": null,
  "paper_url": null,
  "license_url": null,
  "run_count": 0,
  "cover_image_url": null,
  "default_example": null,
  "latest_version": null,
}

Note that there is a limit of 1,000 models per account. For most purposes, we recommend using a single model and pushing new versions of the model as you make changes to it.

Request Body

  • hardware (string, required)
    The SKU for the hardware used to run the model. Possible values can be retrieved from the hardware.list endpoint.
  • name (string, required)
    The name of the model. This must be unique among all models owned by the user or organization.
  • owner (string, required)
    The name of the user or organization that will own the model. This must be the same as the user or organization that is making the API request. In other words, the API token used in the request must belong to this user or organization.
  • visibility (string, required)
    Whether the model should be public or private. A public model can be viewed and run by anyone, whereas a private model can be viewed and run only by the user or organization members that own the model.
  • cover_image_url (string)
    A URL for the model's cover image. This should be an image file.
  • description (string)
    A description of the model.
  • github_url (string)
    A URL for the model's source code on GitHub.
  • license_url (string)
    A URL for the model's license.
  • paper_url (string)
    A URL for the model's paper.

Search public models

Endpoint

QUERY https://api.replicate.com/v1/models

Description

Get a list of public models matching a search query.

Example cURL request:

curl -s -X QUERY \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: text/plain" \
  -d "hello" \
  https://api.replicate.com/v1/models
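
The QUERY method is not built into every HTTP client, but most let you pass an arbitrary method name. Here is a minimal sketch in Python, where requests.request sends whatever method string you give it:

import os
import requests

response = requests.request(
    "QUERY",
    "https://api.replicate.com/v1/models",
    headers={
        "Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}",
        "Content-Type": "text/plain",
    },
    data="hello",  # the search query is sent as the plain-text request body
)
print([model["name"] for model in response.json()["results"]])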

The response will be a paginated JSON object containing an array of model objects:

{
  "next": null,
  "previous": null,
  "results": [
    {
      "url": "https://replicate.com/acme/hello-world",
      "owner": "acme",
      "name": "hello-world",
      "description": "A tiny model that says hello",
      "visibility": "public",
      "github_url": "https://github.com/replicate/cog-examples",
      "paper_url": null,
      "license_url": null,
      "run_count": 5681081,
      "cover_image_url": "...",
      "default_example": {...},
      "latest_version": {...}
    }
  ]
}

The cover_image_url string is an HTTPS URL for an image file. This can be:

  • An image uploaded by the model author.
  • The output file of the example prediction, if the model author has not set a cover image.
  • The input file of the example prediction, if the model author has not set a cover image and the example prediction has no output file.
  • A generic fallback image.

Delete a model

Endpoint

DELETE https://api.replicate.com/v1/models/{model_owner}/{model_name}

Description

Delete a model

Model deletion has some restrictions:

  • You can only delete models you own.
  • You can only delete private models.
  • You can only delete models that have no versions associated with them. Currently you'll need to delete the model's versions before you can delete the model itself.

Example cURL request:

curl -s -X DELETE \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  https://api.replicate.com/v1/models/replicate/hello-world

The response will be an empty 204, indicating the model has been deleted.

URL Parameters

  • model_owner (string, required)
    The name of the user or organization that owns the model.
  • model_name (string, required)
    The name of the model.

Get a model

Endpoint

GET https://api.replicate.com/v1/models/{model_owner}/{model_name}

Description

Get a single model by name.

Example cURL request:

curl -s \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  https://api.replicate.com/v1/models/replicate/hello-world

The response will be a model object in the following format:

{
  "url": "https://replicate.com/replicate/hello-world",
  "owner": "replicate",
  "name": "hello-world",
  "description": "A tiny model that says hello",
  "visibility": "public",
  "github_url": "https://github.com/replicate/cog-examples",
  "paper_url": null,
  "license_url": null,
  "run_count": 5681081,
  "cover_image_url": "...",
  "default_example": {...},
  "latest_version": {...},
}

The cover_image_url string is an HTTPS URL for an image file. This can be:

  • An image uploaded by the model author.
  • The output file of the example prediction, if the model author has not set a cover image.
  • The input file of the example prediction, if the model author has not set a cover image and the example prediction has no output file.
  • A generic fallback image.

The default_example object is a prediction created with this model.

The latest_version object is the model's most recently pushed version.

URL Parameters

  • model_owner (string, required)
    The name of the user or organization that owns the model.
  • model_name (string, required)
    The name of the model.

Create a prediction using an official model

Endpoint

POST https://api.replicate.com/v1/models/{model_owner}/{model_name}/predictions

Description

Create a prediction for the model and inputs you provide.

Example cURL request:

curl -s -X POST -H 'Prefer: wait' \
  -d '{"input": {"prompt": "Write a short poem about the weather."}}' \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H 'Content-Type: application/json' \
  https://api.replicate.com/v1/models/meta/meta-llama-3-70b-instruct/predictions

The request will wait up to 60 seconds for the model to run. If this time is exceeded, the prediction will be returned in a "starting" state and will need to be retrieved using the predictions.get endpoint.

For a complete overview of the models.predictions.create API, check out our documentation on creating a prediction, which covers a variety of use cases.

URL Parameters

  • model_owner (string, required)
    The name of the user or organization that owns the model.
  • model_name (string, required)
    The name of the model.

Headers

  • Prefer (string)
    Set this header to wait to hold the request open until the prediction completes, for up to 60 seconds.

Request Body

  • input (object, required)

    The model's input as a JSON object. The input schema depends on what model you are running. To see the available inputs, click the "API" tab on the model you are running or get the model version and look at its openapi_schema property. For example, stability-ai/sdxl takes prompt as an input.

    Files should be passed as HTTP URLs or data URLs.

    Use an HTTP URL when:

    • you have a large file > 256kb
    • you want to be able to use the file multiple times
    • you want your prediction metadata to be associable with your input files

    Use a data URL when:

    • you have a small file <= 256kb
    • you don't want to upload and host the file somewhere
    • you don't need to use the file again (Replicate will not store it)
  • stream (boolean)

    This field is deprecated.

    Request a URL to receive streaming output using server-sent events (SSE).

    This field is no longer needed, as the returned prediction will always have a stream entry in its urls property if the model supports streaming.

  • webhook (string)

    An HTTPS URL for receiving a webhook when the prediction has new output. The webhook will be a POST request where the request body is the same as the response body of the get prediction operation. If there are network problems, we will retry the webhook a few times, so make sure it can be safely called more than once. Replicate will not follow redirects when sending webhook requests to your service, so be sure to specify a URL that will resolve without redirecting.

  • webhook_events_filter (array)

    By default, we will send requests to your webhook URL whenever there are new outputs or the prediction has finished. You can change which events trigger webhook requests by specifying webhook_events_filter in the prediction request:

    • start: immediately on prediction start
    • output: each time a prediction generates an output (note that predictions can generate multiple outputs)
    • logs: each time log output is generated by a prediction
    • completed: when the prediction reaches a terminal state (succeeded/canceled/failed)

    For example, if you only wanted requests to be sent at the start and end of the prediction, you would provide:

    {
      "input": {
        "text": "Alice"
      },
      "webhook": "https://example.com/my-webhook",
      "webhook_events_filter": ["start", "completed"]
    }

    Requests for event types output and logs will be sent at most once every 500ms. If you request start and completed webhooks, then they'll always be sent regardless of throttling.

List model versions

Endpoint

GET https://api.replicate.com/v1/models/{model_owner}/{model_name}/versions

Description

Get a list of the versions of a model.

Example cURL request:

curl -s \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  https://api.replicate.com/v1/models/replicate/hello-world/versions

The response will be a paginated JSON list of model version objects, sorted with the most recent version first:

{
  "next": null,
  "previous": null,
  "results": [
    {
      "id": "5c7d5dc6dd8bf75c1acaa8565735e7986bc5b66206b55cca93cb72c9bf15ccaa",
      "created_at": "2022-04-26T19:29:04.418669Z",
      "cog_version": "0.3.0",
      "openapi_schema": {...}
    }
  ]
}

URL Parameters

  • model_owner (string, required)
    The name of the user or organization that owns the model.
  • model_name (string, required)
    The name of the model.

Delete a model version

Endpoint

DELETE https://api.replicate.com/v1/models/{model_owner}/{model_name}/versions/{version_id}

Description

Delete a model version and all associated predictions, including all output files.

Model version deletion has some restrictions:

  • You can only delete versions from models you own.
  • You can only delete versions from private models.
  • You cannot delete a version if someone other than you has run predictions with it.
  • You cannot delete a version if it is being used as the base model for a fine tune/training.
  • You cannot delete a version if it has an associated deployment.
  • You cannot delete a version if another model version is overridden to use it.

Example cURL request:

curl -s -X DELETE \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  https://api.replicate.com/v1/models/replicate/hello-world/versions/5c7d5dc6dd8bf75c1acaa8565735e7986bc5b66206b55cca93cb72c9bf15ccaa

The response will be an empty 202, indicating the deletion request has been accepted. It might take a few minutes to be processed.

URL Parameters

  • model_owner (string, required)
    The name of the user or organization that owns the model.
  • model_name (string, required)
    The name of the model.
  • version_id (string, required)
    The ID of the version.

Get a model version

Endpoint

GET https://api.replicate.com/v1/models/{model_owner}/{model_name}/versions/{version_id}

Description

Get a single model version by its ID.

Example cURL request:

curl -s \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  https://api.replicate.com/v1/models/replicate/hello-world/versions/5c7d5dc6dd8bf75c1acaa8565735e7986bc5b66206b55cca93cb72c9bf15ccaa

The response will be the version object:

{
  "id": "5c7d5dc6dd8bf75c1acaa8565735e7986bc5b66206b55cca93cb72c9bf15ccaa",
  "created_at": "2022-04-26T19:29:04.418669Z",
  "cog_version": "0.3.0",
  "openapi_schema": {...}
}

Every model describes its inputs and outputs with OpenAPI Schema Objects in the openapi_schema property.

The openapi_schema.components.schemas.Input property for the replicate/hello-world model looks like this:

{
  "type": "object",
  "title": "Input",
  "required": [
    "text"
  ],
  "properties": {
    "text": {
      "x-order": 0,
      "type": "string",
      "title": "Text",
      "description": "Text to prefix with 'hello '"
    }
  }
}

The openapi_schema.components.schemas.Output property for the replicate/hello-world model looks like this:

{
  "type": "string",
  "title": "Output"
}

For more details, see the docs on Cog's supported input and output types.
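
To inspect a model's inputs programmatically, fetch the version and walk its openapi_schema. A minimal sketch in Python using the requests library:

import os
import requests

headers = {"Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}"}

url = (
    "https://api.replicate.com/v1/models/replicate/hello-world/versions/"
    "5c7d5dc6dd8bf75c1acaa8565735e7986bc5b66206b55cca93cb72c9bf15ccaa"
)
version = requests.get(url, headers=headers).json()

# Print each input's name, type, and description from the Input schema.
input_schema = version["openapi_schema"]["components"]["schemas"]["Input"]
for name, spec in input_schema["properties"].items():
    print(name, spec.get("type"), "-", spec.get("description", ""))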

URL Parameters

  • model_owner (string, required)
    The name of the user or organization that owns the model.
  • model_name (string, required)
    The name of the model.
  • version_id (string, required)
    The ID of the version.

Create a training

Endpoint

POST https://api.replicate.com/v1/models/{model_owner}/{model_name}/versions/{version_id}/trainings

Description

Start a new training of the model version you specify.

Example request body:

{
  "destination": "{new_owner}/{new_name}",
  "input": {
    "train_data": "https://example.com/my-input-images.zip",
  },
  "webhook": "https://example.com/my-webhook",
}

Example cURL request:

curl -s -X POST \
  -d '{"destination": "{new_owner}/{new_name}", "input": {"input_images": "https://example.com/my-input-images.zip"}}' \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H 'Content-Type: application/json' \
  https://api.replicate.com/v1/models/stability-ai/sdxl/versions/da77bc59ee60423279fd632efb4795ab731d9e3ca9705ef3341091fb989b7eaf/trainings

The response will be the training object:

{
  "id": "zz4ibbonubfz7carwiefibzgga",
  "model": "stability-ai/sdxl",
  "version": "da77bc59ee60423279fd632efb4795ab731d9e3ca9705ef3341091fb989b7eaf",
  "input": {
    "input_images": "https://example.com/my-input-images.zip"
  },
  "logs": "",
  "error": null,
  "status": "starting",
  "created_at": "2023-09-08T16:32:56.990893084Z",
  "urls": {
    "cancel": "https://api.replicate.com/v1/predictions/zz4ibbonubfz7carwiefibzgga/cancel",
    "get": "https://api.replicate.com/v1/predictions/zz4ibbonubfz7carwiefibzgga"
  }
}

As models can take several minutes or more to train, the result will not be available immediately. To get the final result of the training you should either provide a webhook HTTPS URL for us to call when the results are ready, or poll the get a training endpoint until it has finished.
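
Polling can be as simple as calling the get a training endpoint until the status is terminal. A minimal sketch in Python using the requests library (the training ID is taken from the example above):

import os
import time
import requests

headers = {"Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}"}
url = "https://api.replicate.com/v1/trainings/zz4ibbonubfz7carwiefibzgga"

# Check the training every 30 seconds until it reaches a terminal state.
while True:
    training = requests.get(url, headers=headers).json()
    if training["status"] in ("succeeded", "failed", "canceled"):
        break
    time.sleep(30)

print(training["status"], training.get("output"))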

When a training completes, it creates a new version of the model at the specified destination.

To find some models to train on, check out the trainable language models collection.

URL Parameters

  • model_owner (string, required)
    The name of the user or organization that owns the model.
  • model_name (string, required)
    The name of the model.
  • version_id (string, required)
    The ID of the version.

Request Body

  • destination (string, required)

    A string representing the desired model to push to in the format {destination_model_owner}/{destination_model_name}. This should be an existing model owned by the user or organization making the API request. If the destination is invalid, the server will return an appropriate 4XX response.

  • input (object, required)

    An object containing inputs to the Cog model's train() function.

  • webhook (string)
    An HTTPS URL for receiving a webhook when the training completes. The webhook will be a POST request where the request body is the same as the response body of the get training operation. If there are network problems, we will retry the webhook a few times, so make sure it can be safely called more than once. Replicate will not follow redirects when sending webhook requests to your service, so be sure to specify a URL that will resolve without redirecting.
  • webhook_events_filter (array)

    By default, we will send requests to your webhook URL whenever there are new outputs or the training has finished. You can change which events trigger webhook requests by specifying webhook_events_filter in the training request:

    • start: immediately on training start
    • output: each time a training generates an output (note that trainings can generate multiple outputs)
    • logs: each time log output is generated by a training
    • completed: when the training reaches a terminal state (succeeded/canceled/failed)

    For example, if you only wanted requests to be sent at the start and end of the training, you would provide:

    {
      "destination": "my-organization/my-model",
      "input": {
        "text": "Alice"
      },
      "webhook": "https://example.com/my-webhook",
      "webhook_events_filter": ["start", "completed"]
    }

    Requests for event types output and logs will be sent at most once every 500ms. If you request start and completed webhooks, then they'll always be sent regardless of throttling.

List predictions

Endpoint

GET https://api.replicate.com/v1/predictions

Description

Get a paginated list of predictions that you've created. This will include predictions created from the API and the website. It will return 100 records per page.

Example cURL request:

curl -s \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  https://api.replicate.com/v1/predictions

The response will be a paginated JSON array of prediction objects, sorted with the most recent prediction first:

{
  "next": null,
  "previous": null,
  "results": [
    {
      "completed_at": "2023-09-08T16:19:34.791859Z",
      "created_at": "2023-09-08T16:19:34.907244Z",
      "data_removed": false,
      "error": null,
      "id": "gm3qorzdhgbfurvjtvhg6dckhu",
      "input": {
        "text": "Alice"
      },
      "metrics": {
        "predict_time": 0.012683
      },
      "output": "hello Alice",
      "started_at": "2023-09-08T16:19:34.779176Z",
      "source": "api",
      "status": "succeeded",
      "urls": {
        "get": "https://api.replicate.com/v1/predictions/gm3qorzdhgbfurvjtvhg6dckhu",
        "cancel": "https://api.replicate.com/v1/predictions/gm3qorzdhgbfurvjtvhg6dckhu/cancel"
      },
      "model": "replicate/hello-world",
      "version": "5c7d5dc6dd8bf75c1acaa8565735e7986bc5b66206b55cca93cb72c9bf15ccaa",
    }
  ]
}

id will be the unique ID of the prediction.

source will indicate how the prediction was created. Possible values are web or api.

status will be the status of the prediction. Refer to get a single prediction for possible values.

urls will be a convenience object that can be used to construct new API requests for the given prediction. If the requested model version supports streaming, this will have a stream entry with an HTTPS URL that you can use to construct an EventSource.

model will be the model identifier string in the format of {model_owner}/{model_name}.

version will be the unique ID of the model version used to create the prediction.

data_removed will be true if the input and output data has been deleted.

Create a prediction

Endpoint

POST https://api.replicate.com/v1/predictions

Description

Create a prediction for the model version and inputs you provide.

Example cURL request:

curl -s -X POST -H 'Prefer: wait' \
  -d '{"version": "5c7d5dc6dd8bf75c1acaa8565735e7986bc5b66206b55cca93cb72c9bf15ccaa", "input": {"text": "Alice"}}' \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H 'Content-Type: application/json' \
  https://api.replicate.com/v1/predictions

The request will wait up to 60 seconds for the model to run. If this time is exceeded, the prediction will be returned in a "starting" state and will need to be retrieved using the predictions.get endpoint.

For a complete overview of the predictions.create API, check out our documentation on creating a prediction, which covers a variety of use cases.
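
If the synchronous wait times out, you can poll the prediction's get URL until it reaches a terminal state. A minimal sketch in Python using the requests library:

import os
import time
import requests

headers = {
    "Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}",
    "Content-Type": "application/json",
    "Prefer": "wait",
}

# Create the prediction; the request waits up to 60 seconds for the model to run.
response = requests.post(
    "https://api.replicate.com/v1/predictions",
    headers=headers,
    json={
        "version": "5c7d5dc6dd8bf75c1acaa8565735e7986bc5b66206b55cca93cb72c9bf15ccaa",
        "input": {"text": "Alice"},
    },
)
response.raise_for_status()
prediction = response.json()

# Poll until the prediction has succeeded, failed, or been canceled.
while prediction["status"] not in ("succeeded", "failed", "canceled"):
    time.sleep(1)
    prediction = requests.get(prediction["urls"]["get"], headers=headers).json()

print(prediction["status"], prediction.get("output"))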

Headers

  • Prefer (string)
    Set this header to wait to hold the request open until the prediction completes, for up to 60 seconds.

Request Body

  • input (object, required)

    The model's input as a JSON object. The input schema depends on what model you are running. To see the available inputs, click the "API" tab on the model you are running or get the model version and look at its openapi_schema property. For example, stability-ai/sdxl takes prompt as an input.

    Files should be passed as HTTP URLs or data URLs.

    Use an HTTP URL when:

    • you have a large file > 256kb
    • you want to be able to use the file multiple times
    • you want your prediction metadata to be associable with your input files

    Use a data URL when (see the data URL sketch after this request body list):

    • you have a small file <= 256kb
    • you don't want to upload and host the file somewhere
    • you don't need to use the file again (Replicate will not store it)
  • version (string, required)
    The ID of the model version that you want to run.
  • stream (boolean)

    This field is deprecated.

    Request a URL to receive streaming output using server-sent events (SSE).

    This field is no longer needed, as the returned prediction will always have a stream entry in its urls property if the model supports streaming.

  • webhook (string)

    An HTTPS URL for receiving a webhook when the prediction has new output. The webhook will be a POST request where the request body is the same as the response body of the get prediction operation. If there are network problems, we will retry the webhook a few times, so make sure it can be safely called more than once. Replicate will not follow redirects when sending webhook requests to your service, so be sure to specify a URL that will resolve without redirecting.

  • webhook_events_filter (array)

    By default, we will send requests to your webhook URL whenever there are new outputs or the prediction has finished. You can change which events trigger webhook requests by specifying webhook_events_filter in the prediction request:

    • start: immediately on prediction start
    • output: each time a prediction generates an output (note that predictions can generate multiple outputs)
    • logs: each time log output is generated by a prediction
    • completed: when the prediction reaches a terminal state (succeeded/canceled/failed)

    For example, if you only wanted requests to be sent at the start and end of the prediction, you would provide:

    {
      "version": "5c7d5dc6dd8bf75c1acaa8565735e7986bc5b66206b55cca93cb72c9bf15ccaa",
      "input": {
        "text": "Alice"
      },
      "webhook": "https://example.com/my-webhook",
      "webhook_events_filter": ["start", "completed"]
    }

    Requests for event types output and logs will be sent at most once every 500ms. If you request start and completed webhooks, then they'll always be sent regardless of throttling.
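
For the data URL option described under input above, a small file can be inlined directly in the request body. A minimal sketch in Python; the input name "image" and the file path are illustrative, so check the model's input schema for the real field names:

import base64
import mimetypes
import os
import requests

def to_data_url(path):
    # Encode a small (<= 256kb) local file as a base64 data URL.
    mime = mimetypes.guess_type(path)[0] or "application/octet-stream"
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("utf-8")
    return f"data:{mime};base64,{encoded}"

headers = {
    "Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}",
    "Content-Type": "application/json",
}

body = {
    "version": "...",  # the 64-character ID of the model version you want to run
    "input": {"image": to_data_url("photo.png")},
}
requests.post("https://api.replicate.com/v1/predictions", headers=headers, json=body)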

Get a prediction

Endpoint

GET https://api.replicate.com/v1/predictions/{prediction_id}

Description

Get the current state of a prediction.

Example cURL request:

curl -s \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  https://api.replicate.com/v1/predictions/gm3qorzdhgbfurvjtvhg6dckhu

The response will be the prediction object:

{
  "id": "gm3qorzdhgbfurvjtvhg6dckhu",
  "model": "replicate/hello-world",
  "version": "5c7d5dc6dd8bf75c1acaa8565735e7986bc5b66206b55cca93cb72c9bf15ccaa",
  "input": {
    "text": "Alice"
  },
  "logs": "",
  "output": "hello Alice",
  "error": null,
  "status": "succeeded",
  "created_at": "2023-09-08T16:19:34.765994Z",
  "data_removed": false,
  "started_at": "2023-09-08T16:19:34.779176Z",
  "completed_at": "2023-09-08T16:19:34.791859Z",
  "metrics": {
    "predict_time": 0.012683
  },
  "urls": {
    "cancel": "https://api.replicate.com/v1/predictions/gm3qorzdhgbfurvjtvhg6dckhu/cancel",
    "get": "https://api.replicate.com/v1/predictions/gm3qorzdhgbfurvjtvhg6dckhu"
  }
}

status will be one of:

  • starting: the prediction is starting up. If this status lasts longer than a few seconds, then it's typically because a new worker is being started to run the prediction.
  • processing: the predict() method of the model is currently running.
  • succeeded: the prediction completed successfully.
  • failed: the prediction encountered an error during processing.
  • canceled: the prediction was canceled by its creator.

In the case of success, output will be an object containing the output of the model. Any files will be represented as HTTPS URLs. You'll need to pass the Authorization header to request them.

In the case of failure, error will contain the error encountered during the prediction.

Terminated predictions (with a status of succeeded, failed, or canceled) will include a metrics object with a predict_time property showing the amount of CPU or GPU time, in seconds, that the prediction used while running. It won't include time waiting for the prediction to start.

All input parameters, output values, and logs are automatically removed after an hour, by default, for predictions created through the API.

You must save a copy of any data or files in the output if you'd like to continue using them. The output key will still be present, but its value will be null after the output has been removed.

Output files are served by replicate.delivery and its subdomains. If you use an allow list of external domains for your assets, add replicate.delivery and *.replicate.delivery to it.
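
Because outputs are removed after an hour by default, download any files you want to keep as soon as the prediction succeeds. A minimal sketch in Python; the prediction ID and output filename are illustrative:

import os
import requests

headers = {"Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}"}

prediction_id = "..."  # the ID of a prediction whose output includes a file URL
prediction = requests.get(
    f"https://api.replicate.com/v1/predictions/{prediction_id}",
    headers=headers,
).json()

if prediction["status"] == "succeeded":
    output = prediction["output"]
    # Some models return a single URL, others return a list of URLs.
    url = output[0] if isinstance(output, list) else output
    with open("output-file", "wb") as f:
        f.write(requests.get(url, headers=headers).content)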

URL Parameters

  • prediction_id (string, required)
    The ID of the prediction to get.

Cancel a prediction

Endpoint

POST https://api.replicate.com/v1/predictions/{prediction_id}/cancel

URL Parameters

  • prediction_id (string, required)
    The ID of the prediction you want to cancel.

List trainings

Endpoint

GET https://api.replicate.com/v1/trainings

Description

Get a paginated list of trainings that you've created. This will include trainings created from the API and the website. It will return 100 records per page.

Example cURL request:

curl -s \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  https://api.replicate.com/v1/trainings

The response will be a paginated JSON array of training objects, sorted with the most recent training first:

{
  "next": null,
  "previous": null,
  "results": [
    {
      "completed_at": "2023-09-08T16:41:19.826523Z",
      "created_at": "2023-09-08T16:32:57.018467Z",
      "error": null,
      "id": "zz4ibbonubfz7carwiefibzgga",
      "input": {
        "input_images": "https://example.com/my-input-images.zip"
      },
      "metrics": {
        "predict_time": 502.713876
      },
      "output": {
        "version": "...",
        "weights": "..."
      },
      "started_at": "2023-09-08T16:32:57.112647Z",
      "source": "api",
      "status": "succeeded",
      "urls": {
        "get": "https://api.replicate.com/v1/trainings/zz4ibbonubfz7carwiefibzgga",
        "cancel": "https://api.replicate.com/v1/trainings/zz4ibbonubfz7carwiefibzgga/cancel"
      },
      "model": "stability-ai/sdxl",
      "version": "da77bc59ee60423279fd632efb4795ab731d9e3ca9705ef3341091fb989b7eaf",
    }
  ]
}

id will be the unique ID of the training.

source will indicate how the training was created. Possible values are web or api.

status will be the status of the training. Refer to get a single training for possible values.

urls will be a convenience object that can be used to construct new API requests for the given training.

version will be the unique ID of the model version used to create the training.

Get a training

Endpoint

GET https://api.replicate.com/v1/trainings/{training_id}

Description

Get the current state of a training.

Example cURL request:

curl -s \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  https://api.replicate.com/v1/trainings/zz4ibbonubfz7carwiefibzgga

The response will be the training object:

{
  "completed_at": "2023-09-08T16:41:19.826523Z",
  "created_at": "2023-09-08T16:32:57.018467Z",
  "error": null,
  "id": "zz4ibbonubfz7carwiefibzgga",
  "input": {
    "input_images": "https://example.com/my-input-images.zip"
  },
  "logs": "...",
  "metrics": {
    "predict_time": 502.713876
  },
  "output": {
    "version": "...",
    "weights": "..."
  },
  "started_at": "2023-09-08T16:32:57.112647Z",
  "status": "succeeded",
  "urls": {
    "get": "https://api.replicate.com/v1/trainings/zz4ibbonubfz7carwiefibzgga",
    "cancel": "https://api.replicate.com/v1/trainings/zz4ibbonubfz7carwiefibzgga/cancel"
  },
  "model": "stability-ai/sdxl",
  "version": "da77bc59ee60423279fd632efb4795ab731d9e3ca9705ef3341091fb989b7eaf",
}

status will be one of:

  • starting: the training is starting up. If this status lasts longer than a few seconds, then it's typically because a new worker is being started to run the training.
  • processing: the train() method of the model is currently running.
  • succeeded: the training completed successfully.
  • failed: the training encountered an error during processing.
  • canceled: the training was canceled by its creator.

In the case of success, output will be an object containing the output of the model. Any files will be represented as HTTPS URLs. You'll need to pass the Authorization header to request them.

In the case of failure, error will contain the error encountered during the training.

Terminated trainings (with a status of succeeded, failed, or canceled) will include a metrics object with a predict_time property showing the amount of CPU or GPU time, in seconds, that the training used while running. It won't include time waiting for the training to start.

URL Parameters

  • training_id (string, required)
    The ID of the training to get.

Cancel a training

Endpoint

POST https://api.replicate.com/v1/trainings/{training_id}/cancel

URL Parameters

  • training_id (string, required)
    The ID of the training you want to cancel.

Get the signing secret for the default webhook

Endpoint

GET https://api.replicate.com/v1/webhooks/default/secret

Description

Get the signing secret for the default webhook endpoint. This is used to verify that webhook requests are coming from Replicate.

Example cURL request:

curl -s \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  https://api.replicate.com/v1/webhooks/default/secret

The response will be a JSON object with a key property:

{
  "key": "..."
}

Rate limits

We limit the number of API requests that can be made to Replicate:

  • You can call the create prediction endpoint at up to 600 requests per minute.
  • You can call all other endpoints at up to 3000 requests per minute.

If you hit a limit, you will receive a response with status 429 with a body like:

{"detail":"Request was throttled. Expected available in 1 second."}

If you want higher limits, contact us.

OpenAPI schema

Replicate's public HTTP API documentation is available as a machine-readable OpenAPI schema in JSON format.

See OpenAPI schema to learn more and download the schema.