Example input:
{
"max_new_tokens": 512,
"prompt": "How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
"prompt_template": "<|im_start|>system\n{system_prompt}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant: ",
"repeat_penalty": 1.1,
"system_prompt": "You are 'Neural-Beagle', an AI assistant and your purpose and drive is to assist the user with any request they have.",
"temperature": 0.1
}

Install Replicate's Node.js client library:

npm install replicate
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=r8_3gZ**********************************
This is your API token. Keep it to yourself.
import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run kcaverly/neuralbeagle14-7b-gguf using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "kcaverly/neuralbeagle14-7b-gguf:4c5eaa791be134c9711ea3a93e62f699540f733fd83c3a2d3df21be42dbdda25",
  {
    input: {
      max_new_tokens: 512,
      prompt: "How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
      prompt_template: "<|im_start|>system\n{system_prompt}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant: ",
      repeat_penalty: 1.1,
      system_prompt: "You are 'Neural-Beagle', an AI assistant and your purpose and drive is to assist the user with any request they have.",
      temperature: 0.1
    }
  }
);
console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate's Python client library:

pip install replicate

Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=r8_3gZ**********************************
This is your API token. Keep it to yourself.
import replicate
Run kcaverly/neuralbeagle14-7b-gguf using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "kcaverly/neuralbeagle14-7b-gguf:4c5eaa791be134c9711ea3a93e62f699540f733fd83c3a2d3df21be42dbdda25",
    input={
        "max_new_tokens": 512,
        "prompt": "How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
        "prompt_template": "<|im_start|>system\n{system_prompt}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant: ",
        "repeat_penalty": 1.1,
        "system_prompt": "You are 'Neural-Beagle', an AI assistant and your purpose and drive is to assist the user with any request they have.",
        "temperature": 0.1
    }
)
# The kcaverly/neuralbeagle14-7b-gguf model can stream output as it's running.
# The predict method returns an iterator, and you can iterate over that output.
for item in output:
    # https://replicate.com/kcaverly/neuralbeagle14-7b-gguf/api#output-schema
    print(item, end="")
To learn more, take a look at the guide on getting started with Python.
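The prompt_template input above is a ChatML-style template: before generation, the server substitutes the system_prompt and prompt inputs into the {system_prompt} and {prompt} placeholders. A minimal sketch of that substitution, done locally here purely for illustration (the deployed model performs this internally):

```python
# Illustrative sketch: how {system_prompt} and {prompt} are filled into the
# ChatML-style prompt_template before the model sees the text.
prompt_template = (
    "<|im_start|>system\n{system_prompt}<|im_end|>\n"
    "<|im_start|>user\n{prompt}<|im_end|>\n"
    "<|im_start|>assistant: "
)

final_prompt = prompt_template.format(
    system_prompt=(
        "You are 'Neural-Beagle', an AI assistant and your purpose and drive "
        "is to assist the user with any request they have."
    ),
    prompt="How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
)

# The rendered prompt ends with the open assistant turn, which is where
# the model starts generating.
print(final_prompt)
```

Leaving prompt_template at its default is usually what you want; it only needs changing if you restructure the conversation format itself.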
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=r8_3gZ**********************************
This is your API token. Keep it to yourself.
Run kcaverly/neuralbeagle14-7b-gguf using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "kcaverly/neuralbeagle14-7b-gguf:4c5eaa791be134c9711ea3a93e62f699540f733fd83c3a2d3df21be42dbdda25",
    "input": {
      "max_new_tokens": 512,
      "prompt": "How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
      "prompt_template": "<|im_start|>system\\n{system_prompt}<|im_end|>\\n<|im_start|>user\\n{prompt}<|im_end|>\\n<|im_start|>assistant: ",
      "repeat_penalty": 1.1,
      "system_prompt": "You are \'Neural-Beagle\', an AI assistant and your purpose and drive is to assist the user with any request they have.",
      "temperature": 0.1
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
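Hand-escaping the newlines and quotes inside the shell $'…' string is error-prone. One alternative sketch is to build the same request body programmatically and let json.dumps handle the escaping (this only constructs the payload and headers; no request is sent here):

```python
import json
import os

# Build the same prediction request body in Python instead of hand-escaping
# newlines and quotes inside a shell $'...' string. Payload construction only;
# sending it (e.g. with an HTTP client) is left out.
payload = {
    "version": "kcaverly/neuralbeagle14-7b-gguf:4c5eaa791be134c9711ea3a93e62f699540f733fd83c3a2d3df21be42dbdda25",
    "input": {
        "max_new_tokens": 512,
        "prompt": "How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
        "prompt_template": "<|im_start|>system\n{system_prompt}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant: ",
        "repeat_penalty": 1.1,
        "system_prompt": "You are 'Neural-Beagle', an AI assistant and your purpose and drive is to assist the user with any request they have.",
        "temperature": 0.1,
    },
}

headers = {
    "Authorization": f"Bearer {os.environ.get('REPLICATE_API_TOKEN', '')}",
    "Content-Type": "application/json",
    "Prefer": "wait",
}

# json.dumps escapes the embedded newlines ("\n" -> "\\n") and quotes for us.
body = json.dumps(payload)
print(body[:80])
```

This produces the same JSON document the curl command sends, without any manual escaping.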
This is an amusing riddle or phrase that doesn't have a literal answer. It's often used to illustrate the absurdity of asking how much something would do if it couldn't naturally perform that action. In this case, woodchucks (also known as groundhogs) are not known for chucking wood, as their diet consists mainly of plants and they dig burrows. So, there is no specific amount of wood a woodchuck could or would chuck if it were possible for them to do so.
{
"id": "azutjsbbrm7sfnix2w2w2qkxri",
"model": "kcaverly/neuralbeagle14-7b-gguf",
"version": "4c5eaa791be134c9711ea3a93e62f699540f733fd83c3a2d3df21be42dbdda25",
"input": {
"max_new_tokens": 512,
"prompt": "How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
"prompt_template": "<|im_start|>system\n{system_prompt}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant: ",
"repeat_penalty": 1.1,
"system_prompt": "You are 'Neural-Beagle', an AI assistant and your purpose and drive is to assist the user with any request they have.",
"temperature": 0.1
},
"logs": "llama_print_timings: load time = 856.31 ms\nllama_print_timings: sample time = 82.09 ms / 118 runs ( 0.70 ms per token, 1437.53 tokens per second)\nllama_print_timings: prompt eval time = 856.09 ms / 91 tokens ( 9.41 ms per token, 106.30 tokens per second)\nllama_print_timings: eval time = 6084.68 ms / 117 runs ( 52.01 ms per token, 19.23 tokens per second)\nllama_print_timings: total time = 7623.67 ms",
"output": [
"\n",
"This",
" is",
" an",
" am",
"using",
" r",
"iddle",
" or",
" phrase",
" that",
" doesn",
"'",
"t",
" have",
" a",
" literal",
" answer",
".",
" It",
"'",
"s",
" often",
" used",
" to",
" illustr",
"ate",
" the",
" absurd",
"ity",
" of",
" asking",
" how",
" much",
" something",
" would",
" do",
" if",
" it",
" couldn",
"'",
"t",
" naturally",
" perform",
" that",
" action",
".",
" In",
" this",
" case",
",",
" wood",
"ch",
"ucks",
" (",
"also",
" known",
" as",
" ground",
"h",
"ogs",
")",
" are",
" not",
" known",
" for",
" chuck",
"ing",
" wood",
",",
" as",
" their",
" diet",
" consists",
" mainly",
" of",
" plants",
" and",
" they",
" dig",
" bur",
"rows",
".",
" So",
",",
" there",
" is",
" no",
" specific",
" amount",
" of",
" wood",
" a",
" wood",
"ch",
"uck",
" could",
" or",
" would",
" chuck",
" if",
" it",
" were",
" possible",
" for",
" them",
" to",
" do",
" so",
".",
"\n",
"",
""
],
"data_removed": false,
"error": null,
"source": "web",
"status": "succeeded",
"created_at": "2024-01-19T22:14:32.568492Z",
"started_at": "2024-01-19T22:18:52.151537Z",
"completed_at": "2024-01-19T22:18:59.79876Z",
"urls": {
"cancel": "https://api.replicate.com/v1/predictions/azutjsbbrm7sfnix2w2w2qkxri/cancel",
"get": "https://api.replicate.com/v1/predictions/azutjsbbrm7sfnix2w2w2qkxri",
"stream": "https://streaming-api.svc.us-central1.g.replicate.net/v1/predictions/azutjsbbrm7sfnix2w2w2qkxri"
},
"metrics": {
"predict_time": 7.647223,
"total_time": 267.230268
}
}
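The "output" field in the response above is the list of token strings in the order they were streamed; the final completion is simply their concatenation. A small sketch using the first few tokens from that list (truncated here for brevity):

```python
# The prediction's "output" is a list of streamed token strings; joining them
# reconstructs the full completion. Only the first few tokens are shown.
tokens = ["\n", "This", " is", " an", " am", "using", " r", "iddle", " or", " phrase", "."]

text = "".join(tokens).strip()
print(text)  # -> This is an amusing riddle or phrase.
```

This is also why the Python streaming loop above prints with end="": each item is a raw token fragment, not a complete word or line.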