meta/llama-2-13b-chat
A 13 billion parameter language model from Meta, fine-tuned for chat completions.
Run meta/llama-2-13b-chat with an API.
Use one of our client libraries to get started quickly.
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Learn more about authentication
Install Replicate’s Node.js client library
npm install replicate
Run meta/llama-2-13b-chat using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
import Replicate from "replicate";
// The client reads REPLICATE_API_TOKEN from the environment by default.
const replicate = new Replicate();
const input = {
top_p: 1,
prompt: "Write a story in the style of James Joyce. The story should be about a trip to the Irish countryside in 2083, to see the beautiful scenery and robots.",
temperature: 0.75,
max_new_tokens: 500
};
for await (const event of replicate.stream("meta/llama-2-13b-chat", { input })) {
process.stdout.write(`${event}`)
}
//=> " Sure, I'd be happy to help! Here's a story in the style...
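The stream yields events whose string form is the next chunk of generated text, so you can both print tokens as they arrive and keep the full completion. A minimal sketch of that accumulation pattern, using a stub async generator (`fakeStream`, a hypothetical stand-in for a live `replicate.stream(...)` call):

```javascript
// Stand-in for replicate.stream(...): an async iterator whose
// items stringify to the generated text chunks.
async function* fakeStream() {
  yield " Sure,";
  yield " I'd be happy";
  yield " to help!";
}

// Print each chunk as it arrives and return the full text at the end.
async function collect(stream) {
  let output = "";
  for await (const event of stream) {
    process.stdout.write(`${event}`); // same interpolation as above
    output += `${event}`;             // keep the complete completion
  }
  return output;
}

(async () => {
  const text = await collect(fakeStream());
  // text now holds " Sure, I'd be happy to help!"
})();
```

With a real model the same `collect` helper works unchanged: pass it `replicate.stream("meta/llama-2-13b-chat", { input })` instead of the stub.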