Input
Run this model in Node.js with one line of code:
npm install replicate
Then, set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
import Replicate from "replicate";
const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run 01-ai/yi-6b using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "01-ai/yi-6b:d302e64fad6b4d85d47b3d1ed569b06107504f5717ee1ec12136987bec1e94f1",
  {
    input: {
      top_k: 50,
      top_p: 0.95,
      prompt: "Question: I had 3 apples and I ate 1. How many apples do I have left?\nAnswer:",
      temperature: 0.9,
      max_new_tokens: 64,
      prompt_template: "{prompt}",
      presence_penalty: 1,
      frequency_penalty: 1
    }
  }
);
console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
pip install replicate
Then, set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
import replicate
Run 01-ai/yi-6b using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "01-ai/yi-6b:d302e64fad6b4d85d47b3d1ed569b06107504f5717ee1ec12136987bec1e94f1",
    input={
        "top_k": 50,
        "top_p": 0.95,
        "prompt": "Question: I had 3 apples and I ate 1. How many apples do I have left?\nAnswer:",
        "temperature": 0.9,
        "max_new_tokens": 64,
        "prompt_template": "{prompt}",
        "presence_penalty": 1,
        "frequency_penalty": 1
    }
)
# The 01-ai/yi-6b model can stream output as it's running.
# The predict method returns an iterator, and you can iterate over that output.
for item in output:
    # https://replicate.com/01-ai/yi-6b/api#output-schema
    print(item, end="")
To learn more, take a look at the guide on getting started with Python.
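Because the model's output is a list of string tokens, you can also collect the stream into a single string instead of printing tokens as they arrive. A minimal sketch, using the first few tokens from the sample output shown later on this page in place of a live API call:

```python
# Sketch: assemble streamed tokens into one completion string.
# `tokens` stands in for the iterator returned by replicate.run;
# these values are copied from the example prediction output below.
tokens = [
    " After", " eating", " one", " apple", ",", " I", " had",
    " ", "2", " apples", " left", ".",
]

# Joining preserves the model's own spacing between tokens.
completion = "".join(tokens)
print(completion)
```

The same `"".join(...)` works on the real iterator once the prediction finishes streaming.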
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Run 01-ai/yi-6b using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "01-ai/yi-6b:d302e64fad6b4d85d47b3d1ed569b06107504f5717ee1ec12136987bec1e94f1",
    "input": {
      "top_k": 50,
      "top_p": 0.95,
      "prompt": "Question: I had 3 apples and I ate 1. How many apples do I have left?\\nAnswer:",
      "temperature": 0.9,
      "max_new_tokens": 64,
      "prompt_template": "{prompt}",
      "presence_penalty": 1,
      "frequency_penalty": 1
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
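If you'd rather build the request in code than in a shell, the same JSON body can be constructed with Python's standard library. This is only a sketch of the payload shape; the version string and input values are copied from the curl example above:

```python
import json

# Same request body as the curl example above.
payload = {
    "version": "01-ai/yi-6b:d302e64fad6b4d85d47b3d1ed569b06107504f5717ee1ec12136987bec1e94f1",
    "input": {
        "top_k": 50,
        "top_p": 0.95,
        "prompt": "Question: I had 3 apples and I ate 1. How many apples do I have left?\nAnswer:",
        "temperature": 0.9,
        "max_new_tokens": 64,
        "prompt_template": "{prompt}",
        "presence_penalty": 1,
        "frequency_penalty": 1,
    },
}

# Serialize to the JSON string you would POST to
# https://api.replicate.com/v1/predictions with the headers shown above.
body = json.dumps(payload)
```

Sending it is then a matter of passing `body` to whatever HTTP client you prefer, with the same `Authorization`, `Content-Type`, and `Prefer: wait` headers as the curl command.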
Output
{
  "completed_at": "2023-11-14T20:36:48.026749Z",
  "created_at": "2023-11-14T20:36:47.668615Z",
  "data_removed": false,
  "error": null,
  "id": "pwsib73boiljkblsdl4vp6j5zi",
  "input": {
    "top_k": 50,
    "top_p": 0.95,
    "prompt": "Question: I had 3 apples and I ate 1. How many apples do I have left?\nAnswer:",
    "temperature": 0.9,
    "max_new_tokens": 64,
    "prompt_template": "{prompt}",
    "presence_penalty": 1,
    "frequency_penalty": 1
  },
  "logs": "Generated 23 tokens in 0.25441765785217285 seconds.",
  "metrics": {
    "predict_time": 0.321108,
    "total_time": 0.358134
  },
  "output": [
    " After",
    " eating",
    " one",
    " apple",
    ",",
    " I",
    " had",
    " ",
    "2",
    " apples",
    " left",
    ".",
    "\n",
    "(",
    "H",
    ")",
    " -",
    " (",
    "A",
    ")",
    " =",
    " A",
    ""
  ],
  "started_at": "2023-11-14T20:36:47.705641Z",
  "status": "succeeded",
  "urls": {
    "get": "https://api.replicate.com/v1/predictions/pwsib73boiljkblsdl4vp6j5zi",
    "cancel": "https://api.replicate.com/v1/predictions/pwsib73boiljkblsdl4vp6j5zi/cancel"
  },
  "version": "d302e64fad6b4d85d47b3d1ed569b06107504f5717ee1ec12136987bec1e94f1"
}
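The prediction's logs and metrics give you enough to estimate decode throughput. A small sketch using the values from the response above (23 tokens generated in roughly 0.254 seconds):

```python
# Values taken from the prediction's "logs" field above.
tokens_generated = 23
generation_seconds = 0.25441765785217285

# Rough decode throughput in tokens per second (~90 tok/s here).
tokens_per_second = tokens_generated / generation_seconds
print(f"{tokens_per_second:.1f} tokens/sec")
```

Note that `metrics.predict_time` (0.321s) is slightly larger than the generation time in the logs, since it also covers per-prediction overhead around token generation.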