tomasmcm/llama-3-8b-instruct-gradient-4194k
Source: gradientai/Llama-3-8B-Instruct-Gradient-4194k ✦ Quant: solidrust/Llama-3-8B-Instruct-Gradient-4194k-AWQ ✦ Extends Llama-3 8B's context length from 8k to 4194k tokens
Run tomasmcm/llama-3-8b-instruct-gradient-4194k with an API
Use one of our client libraries to get started quickly.
Set the REPLICATE_API_TOKEN environment variable
export REPLICATE_API_TOKEN=<paste-your-token-here>
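If you prefer not to rely on the environment variable lookup, the token can also be passed to the client directly. A minimal sketch, assuming the auth constructor option of the replicate Node.js package; adapt the secret source to your setup:

import Replicate from "replicate";

// Pass the API token explicitly instead of relying on the default
// REPLICATE_API_TOKEN environment lookup.
const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});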
Install Replicate’s Node.js client library
npm install replicate
Run tomasmcm/llama-3-8b-instruct-gradient-4194k using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
import Replicate from "replicate";

// The client reads REPLICATE_API_TOKEN from the environment by default.
const replicate = new Replicate();

const input = {
  stop: "</s>",
  // Prompt formatted with the Llama 3 instruct chat template.
  prompt: "<|start_header_id|>system<|end_header_id|>\nYou are a helpful assistant. Perform the task to the best of your ability.<|eot_id|>\n<|start_header_id|>user<|end_header_id|>\nYou're standing on the surface of the Earth. You walk one mile south, one mile west and one mile north. You end up exactly where you started. Where are you?<|eot_id|>\n<|start_header_id|>assistant<|end_header_id|>\n",
  max_tokens: 1024
};

// Run the pinned model version and wait for the completed output.
const output = await replicate.run(
  "tomasmcm/llama-3-8b-instruct-gradient-4194k:18b7a95a92c796e31fb118bcf70557f91dd4cf72c466cdc04ddce394331b09ac",
  { input }
);
console.log(output);
//=> "I would be back where I started!"
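For long generations (this model is aimed at very long contexts), you may prefer to stream tokens as they are produced instead of waiting for the full result. A minimal sketch, assuming a recent version of the replicate Node.js client that exposes replicate.stream(); the prompt below is an illustrative placeholder and the input shape is the same as above:

import Replicate from "replicate";

const replicate = new Replicate();

const input = {
  stop: "</s>",
  prompt: "<|start_header_id|>user<|end_header_id|>\nSummarise the following document.<|eot_id|>\n<|start_header_id|>assistant<|end_header_id|>\n",
  max_tokens: 1024
};

// Iterate over server-sent events and print each text chunk as it arrives.
for await (const event of replicate.stream(
  "tomasmcm/llama-3-8b-instruct-gradient-4194k:18b7a95a92c796e31fb118bcf70557f91dd4cf72c466cdc04ddce394331b09ac",
  { input }
)) {
  process.stdout.write(event.toString());
}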