fire/v-sekai.mediapipe-labeler:aa584fa4
Run this model in Node.js with one line of code:
npm install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
import Replicate from "replicate";
const replicate = new Replicate({
auth: process.env.REPLICATE_API_TOKEN,
});
Run fire/v-sekai.mediapipe-labeler using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
"fire/v-sekai.mediapipe-labeler:aa584fa4350829ea99b34753defdfeffa0a74c76417286ed0ce01b0b02ecf78b",
{
input: {
test_mode: false,
max_people: 100,
export_train: true,
frame_sample_rate: 1
}
}
);
console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
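The frame_sample_rate input shown above controls how often video frames are labeled; a value of 1 processes every frame. A minimal sketch of that sampling logic, assuming the model simply keeps every Nth frame (an assumption for illustration; the exact behavior is defined by the model itself):

```python
def sample_frame_indices(total_frames: int, frame_sample_rate: int) -> list[int]:
    """Return the frame indices that would be labeled, assuming the
    model keeps every Nth frame (an illustrative assumption)."""
    if frame_sample_rate < 1:
        raise ValueError("frame_sample_rate must be >= 1")
    return list(range(0, total_frames, frame_sample_rate))

print(sample_frame_indices(10, 1))  # every frame of a 10-frame clip
print(sample_frame_indices(10, 3))  # every third frame
```

With frame_sample_rate=1 (the value used in the examples above), every frame is kept; larger values trade labeling coverage for speed.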
Install Replicate's Python client library:
pip install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
import replicate
Run fire/v-sekai.mediapipe-labeler using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
"fire/v-sekai.mediapipe-labeler:aa584fa4350829ea99b34753defdfeffa0a74c76417286ed0ce01b0b02ecf78b",
input={
"test_mode": False,
"max_people": 100,
"export_train": True,
"frame_sample_rate": 1
}
)
print(output)
To learn more, take a look at the guide on getting started with Python.
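The input dictionary passed to replicate.run can be assembled and sanity-checked before making the call. A small helper, assuming only the four inputs shown in the example (the model's full schema may define more fields):

```python
def build_input(test_mode=False, max_people=100, export_train=True, frame_sample_rate=1):
    """Assemble the input dict used in the example above, with basic
    type checks. Field names match the example; the model's full
    schema may differ."""
    if not isinstance(max_people, int) or max_people < 1:
        raise ValueError("max_people must be a positive integer")
    if not isinstance(frame_sample_rate, int) or frame_sample_rate < 1:
        raise ValueError("frame_sample_rate must be a positive integer")
    return {
        "test_mode": bool(test_mode),
        "max_people": max_people,
        "export_train": bool(export_train),
        "frame_sample_rate": frame_sample_rate,
    }

print(build_input())
```

Catching a bad value locally is cheaper than waiting for the API to reject the prediction.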
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Run fire/v-sekai.mediapipe-labeler using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
-H "Authorization: Bearer $REPLICATE_API_TOKEN" \
-H "Content-Type: application/json" \
-H "Prefer: wait" \
-d $'{
"version": "aa584fa4350829ea99b34753defdfeffa0a74c76417286ed0ce01b0b02ecf78b",
"input": {
"test_mode": false,
"max_people": 100,
"export_train": true,
"frame_sample_rate": 1
}
}' \
https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
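The curl command above posts a JSON body containing the version hash and the input block. The same headers and payload can be built programmatically; a sketch using only the standard library (the endpoint, header names, and fields are taken directly from the curl example):

```python
import json

VERSION = "aa584fa4350829ea99b34753defdfeffa0a74c76417286ed0ce01b0b02ecf78b"

def build_prediction_request(api_token: str) -> tuple[dict, bytes]:
    """Return the (headers, body) pair equivalent to the curl command above."""
    headers = {
        "Authorization": f"Bearer {api_token}",
        "Content-Type": "application/json",
        "Prefer": "wait",  # ask the API to block until the prediction finishes
    }
    body = json.dumps({
        "version": VERSION,
        "input": {
            "test_mode": False,
            "max_people": 100,
            "export_train": True,
            "frame_sample_rate": 1,
        },
    }).encode()
    return headers, body

headers, body = build_prediction_request("r8_example_token")  # hypothetical token
print(headers["Prefer"], json.loads(body)["version"][:8])
```

The body and headers would then be POSTed to https://api.replicate.com/v1/predictions with any HTTP client.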
Install Cog:
brew install cog
If you don’t have Homebrew, there are other installation options available.
Run this to download the model and run it in your local environment:
cog predict r8.im/fire/v-sekai.mediapipe-labeler@sha256:aa584fa4350829ea99b34753defdfeffa0a74c76417286ed0ce01b0b02ecf78b \
-i 'test_mode=false' \
-i 'max_people=100' \
-i 'export_train=true' \
-i 'frame_sample_rate=1'
To learn more, take a look at the Cog documentation.
Run this to download the model and run it in your local environment:
docker run -d -p 5000:5000 r8.im/fire/v-sekai.mediapipe-labeler@sha256:aa584fa4350829ea99b34753defdfeffa0a74c76417286ed0ce01b0b02ecf78b
curl -s -X POST \
-H "Content-Type: application/json" \
-d $'{
"input": {
"test_mode": false,
"max_people": 100,
"export_train": true,
"frame_sample_rate": 1
}
}' \
http://localhost:5000/predictions
To learn more, take a look at the Cog documentation.
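Once the container is listening on port 5000, the same request can be issued from Python. A sketch using only the standard library — the request object is constructed here; against a running container you would pass it to urllib.request.urlopen:

```python
import json
import urllib.request

def local_prediction_request(input_payload: dict) -> urllib.request.Request:
    """Build the POST request the curl command above sends to the
    local Cog container. Pass the result to urllib.request.urlopen
    once the container is running."""
    return urllib.request.Request(
        "http://localhost:5000/predictions",
        data=json.dumps({"input": input_payload}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = local_prediction_request({
    "test_mode": False,
    "max_people": 100,
    "export_train": True,
    "frame_sample_rate": 1,
})
print(req.full_url, req.get_method())
```

Note that, unlike the hosted API, the local Cog endpoint needs no Authorization header.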