tanzir11 / llm

  • Public
  • 102 runs
  • L40S

Input

Run this model in Node.js with one line of code:

npx create-replicate --model=tanzir11/llm

Or set up a project from scratch:

npm install replicate

Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>

Find your API token in your account settings.

Import and set up the client:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
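The client does not verify the token at construction time; a request made with a missing token only fails once it reaches the API. As a hedged sketch, a small helper (the `requireToken` function below is hypothetical, not part of the `replicate` package) can fail fast with a clearer message:

```javascript
// Hypothetical helper: returns the token from an env-like object,
// or throws a descriptive error if it is missing or empty.
function requireToken(env) {
  const token = env.REPLICATE_API_TOKEN;
  if (!token) {
    throw new Error(
      "REPLICATE_API_TOKEN is not set; find your token in your account settings."
    );
  }
  return token;
}

// Usage with the real environment would look like:
// const replicate = new Replicate({ auth: requireToken(process.env) });
```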

Run tanzir11/llm using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.

const output = await replicate.run(
  "tanzir11/llm:0115fd6c6a9b1deed47a63258c13a0ee65cc3eeb5c20a747bbd651914e602237",
  {
    input: {
      debug: false,
      top_k: 50,
      top_p: 0.9,
      temperature: 0.75,
      system_prompt: "You are a helpful assistant.",
      max_new_tokens: 128,
      min_new_tokens: -1
    }
  }
);
console.log(output);
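For language models on Replicate, `replicate.run` often resolves to an array of string chunks rather than a single string (an assumption about this model's output; check the schema to confirm). Joining the chunks yields the full completion, sketched here with placeholder data:

```javascript
// Assumption: `output` arrives as an array of string chunks,
// as is common for streamed LLM outputs on Replicate.
const chunks = ["The", " quick", " brown", " fox"];

// Join the chunks to recover the complete text.
const text = chunks.join("");
console.log(text); // "The quick brown fox"
```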

To learn more, take a look at the guide on getting started with Node.js.


Run time and cost

This model costs approximately $0.013 to run on Replicate, or about 76 runs per $1, though the actual cost varies with your inputs. It is also open source, so you can run it on your own computer with Docker.

This model runs on Nvidia L40S GPU hardware. Predictions typically complete within 14 seconds, but predict time varies significantly with the inputs.
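The runs-per-dollar figure follows directly from the per-run estimate; a quick sketch of the arithmetic, using only the $0.013 figure quoted above:

```javascript
// Approximate cost per run in USD, from the estimate above.
const costPerRun = 0.013;

// Whole runs affordable per dollar: 1 / 0.013 ≈ 76.9, floored to 76.
const runsPerDollar = Math.floor(1 / costPerRun);
console.log(runsPerDollar); // 76
```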

Readme

This model doesn't have a readme.