meta/llama-2-13b
Base version of Llama 2 13B, a 13-billion-parameter language model.
Run meta/llama-2-13b with an API
Use one of our client libraries to get started quickly.
Set the REPLICATE_API_TOKEN environment variable
export REPLICATE_API_TOKEN=<paste-your-token-here>
Learn more about authentication
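If you prefer not to rely on the environment variable lookup, the token can also be passed to the client constructor. A minimal sketch, assuming the token is available in process.env:
import Replicate from "replicate";

// Pass the token explicitly instead of relying on the default
// REPLICATE_API_TOKEN environment variable lookup.
const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});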
Install Replicate’s Node.js client library
npm install replicate
Run meta/llama-2-13b using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
import Replicate from "replicate";

const replicate = new Replicate();

// Sampling and length controls; see the model's schema for the full list of inputs.
const input = {
  top_k: 50,
  top_p: 0.9,
  prompt: "Once upon a time a llama explored",
  temperature: 0.75,
  max_new_tokens: 128,
  min_new_tokens: -1
};

// Stream tokens as they are generated and print them to stdout.
for await (const event of replicate.stream("meta/llama-2-13b", { input })) {
  process.stdout.write(`${event}`);
}
//=> " the wilderness with his mother.\nHe was hungry and look...
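If you don't need token-by-token streaming, the same prediction can be made with replicate.run, which resolves once generation finishes. A minimal sketch, reusing the input object above; it assumes the model returns its output as an array of text chunks, which is joined into a single string:
// Alternative: wait for the complete generation instead of streaming.
const output = await replicate.run("meta/llama-2-13b", { input });

// Join the returned chunks (if any) into the full completion text.
console.log(Array.isArray(output) ? output.join("") : output);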