The model takes an image (a path to an image file) plus the following options:

model_name: Model name. Default: "yolox-s"
conf: Confidence threshold; only detections with a confidence above this value are kept. Default: 0.3
nms: NMS threshold. NMS removes redundant detections: detections whose mutual overlap (IoU) exceeds this threshold are considered redundant, and only one of them is kept (see the sketch after this list).
tsize: Resize the input image to this size. Default: 640
return_json: Return results in JSON format. Default: false
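To make the conf and nms thresholds concrete, here is a minimal Python sketch of the filtering they control: a confidence cut followed by greedy IoU-based NMS. This illustrates the standard procedure, not this model's own code, and the detection structure used here is made up for the example.

def iou(box_a, box_b):
    # Boxes are (x1, y1, x2, y2).
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    if inter == 0:
        return 0.0
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def filter_detections(detections, conf=0.3, nms=0.3):
    # Each detection is a dict like {"box": (x1, y1, x2, y2), "confidence": 0.87}.
    # 1) Keep only detections above the confidence threshold.
    detections = [d for d in detections if d["confidence"] > conf]
    # 2) Greedy NMS: keep the highest-confidence box first, then drop any box
    #    whose IoU with an already-kept box exceeds the nms threshold.
    detections.sort(key=lambda d: d["confidence"], reverse=True)
    kept = []
    for d in detections:
        if all(iou(d["box"], k["box"]) <= nms for k in kept):
            kept.append(d)
    return kept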
Run this model in Node.js with one line of code. First, install Replicate's Node.js client library:
npm install replicate
Then set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run syedatasneem110/object_detectionv8 using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "syedatasneem110/object_detectionv8:9d23a7fd7a2c83763d66840b5630e425dca83d1d056faa44b4ec5d2ede1b3622",
  {
    input: {
      nms: 0.3,
      conf: 0.3,
      tsize: 640,
      model_name: "yolox-s",
      return_json: false
    }
  }
);
console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
To run the model in Python, install Replicate's Python client library:
pip install replicate
import replicate
output = replicate.run(
    "syedatasneem110/object_detectionv8:9d23a7fd7a2c83763d66840b5630e425dca83d1d056faa44b4ec5d2ede1b3622",
    input={
        "nms": 0.3,
        "conf": 0.3,
        "tsize": 640,
        "model_name": "yolox-s",
        "return_json": False
    }
)
print(output)
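Note that the example above does not include the image itself. Here is a sketch of passing a local file, assuming the input key is named image (the key is not shown in this page's examples); the Python client accepts an open file handle or a public URL for file inputs:

with open("my_photo.jpg", "rb") as image_file:  # placeholder path, use your own image
    output = replicate.run(
        "syedatasneem110/object_detectionv8:9d23a7fd7a2c83763d66840b5630e425dca83d1d056faa44b4ec5d2ede1b3622",
        input={
            "image": image_file,  # assumed key name for the image input
            "nms": 0.3,
            "conf": 0.3,
            "tsize": 640,
            "model_name": "yolox-s",
            "return_json": True,  # ask for results in JSON format
        },
    )
print(output)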
To learn more, take a look at the guide on getting started with Python.
You can also call the model directly over Replicate's HTTP API with cURL:

curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "syedatasneem110/object_detectionv8:9d23a7fd7a2c83763d66840b5630e425dca83d1d056faa44b4ec5d2ede1b3622",
    "input": {
      "nms": 0.3,
      "conf": 0.3,
      "tsize": 640,
      "model_name": "yolox-s",
      "return_json": false
    }
  }' \
  https://api.replicate.com/v1/predictions
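The same request can be made from Python with any HTTP client. A minimal sketch using the third-party requests library (an assumption; the page itself only shows cURL):

import os
import requests

# "Prefer: wait" asks the API to hold the connection until the prediction
# finishes rather than returning immediately with a pending prediction.
resp = requests.post(
    "https://api.replicate.com/v1/predictions",
    headers={
        "Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}",
        "Content-Type": "application/json",
        "Prefer": "wait",
    },
    json={
        "version": "syedatasneem110/object_detectionv8:9d23a7fd7a2c83763d66840b5630e425dca83d1d056faa44b4ec5d2ede1b3622",
        "input": {
            "nms": 0.3,
            "conf": 0.3,
            "tsize": 640,
            "model_name": "yolox-s",
            "return_json": False,
        },
    },
)
resp.raise_for_status()
print(resp.json())  # the prediction object; its "output" field holds the result once finished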
To learn more, take a look at Replicate’s HTTP API reference docs.
This model runs on CPU hardware. We don't yet have enough runs of this model to provide performance information.
This model is cold. You'll get a fast response if the model is warm and already running, and a slower response if the model is cold and starting up.