test
scale — Factor to scale image by. Default: 1.5
Run this model in Node.js with one line of code:
npm install replicate
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run tonywang10101/tony-test-1 using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "tonywang10101/tony-test-1:0c33f6a5b1f4a892ba1325e38e50d583d09144d6e88c45dbb14142d28d5c1a07",
  {
    input: {
      scale: 1.5
    }
  }
);
console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
pip install replicate
import replicate
output = replicate.run(
    "tonywang10101/tony-test-1:0c33f6a5b1f4a892ba1325e38e50d583d09144d6e88c45dbb14142d28d5c1a07",
    input={
        "scale": 1.5
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "tonywang10101/tony-test-1:0c33f6a5b1f4a892ba1325e38e50d583d09144d6e88c45dbb14142d28d5c1a07",
    "input": {
      "scale": 1.5
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
{
  "completed_at": "2024-03-10T05:39:59.888903Z",
  "created_at": "2024-03-10T05:39:53.998195Z",
  "data_removed": false,
  "error": null,
  "id": "ctmdqjbbn2lfqivuvklvp3w6ri",
  "input": {
    "scale": 1.5
  },
  "logs": null,
  "metrics": {
    "predict_time": 0.004798,
    "total_time": 5.890708
  },
  "output": "what the hell of scale 1.5",
  "started_at": "2024-03-10T05:39:59.884105Z",
  "status": "succeeded",
  "urls": {
    "get": "https://api.replicate.com/v1/predictions/ctmdqjbbn2lfqivuvklvp3w6ri",
    "cancel": "https://api.replicate.com/v1/predictions/ctmdqjbbn2lfqivuvklvp3w6ri/cancel"
  },
  "version": "0c33f6a5b1f4a892ba1325e38e50d583d09144d6e88c45dbb14142d28d5c1a07"
}
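The fields in this response can be consumed directly. As a minimal sketch using only the standard library (abbreviating the response to the fields it uses), this checks the prediction's status, reads its output, and computes how much of the total time was spent outside the predict call:

```python
import json

# Prediction response from above, abbreviated to the fields used here.
response = json.loads("""
{
  "status": "succeeded",
  "output": "what the hell of scale 1.5",
  "metrics": {"predict_time": 0.004798, "total_time": 5.890708}
}
""")

if response["status"] == "succeeded":
    print(response["output"])

# Seconds spent outside the predict call itself (e.g. startup overhead).
overhead = response["metrics"]["total_time"] - response["metrics"]["predict_time"]
```

Here predict_time is a small fraction of total_time; most of the elapsed time went to setup rather than the prediction itself.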
This model runs on CPU hardware. We don't yet have enough runs of this model to provide performance information.
This model is cold. You'll get a fast response if the model is warm and already running, and a slower response if the model is cold and starting up.
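When the model is cold, a client may need to wait before the prediction finishes. The client libraries handle this for you, but the mechanics can be sketched as a polling loop. In this sketch, `get_prediction` is a hypothetical zero-argument callable standing in for an HTTP GET against the prediction's `urls.get` endpoint, and the terminal statuses are assumed to be `succeeded`, `failed`, and `canceled`:

```python
import time

# Statuses assumed to mean the prediction will not change further.
TERMINAL_STATUSES = {"succeeded", "failed", "canceled"}

def wait_for_prediction(get_prediction, poll_interval=1.0, sleep=time.sleep):
    """Poll until the prediction reaches a terminal status, then return it.

    get_prediction: hypothetical callable that fetches the current
    prediction JSON (e.g. via the "get" URL shown in the response above).
    """
    while True:
        prediction = get_prediction()
        if prediction["status"] in TERMINAL_STATUSES:
            return prediction
        sleep(poll_interval)
```

The `sleep` parameter is injected only so the loop can be exercised without real delays; in normal use the default `time.sleep` applies.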