# Readme
Generated using an experimental Hugging Face builder tool, using mistralai/Mistral-7B-Instruct-v0.1 as the base model.
See https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1
An experimental copy of the Mistral LLM
Run this model in Node.js with a few lines of code. First, install Replicate's client library:

```shell
npm install replicate
```

Then set the `REPLICATE_API_TOKEN` environment variable (find your API token in your account settings):

```shell
export REPLICATE_API_TOKEN=<paste-your-token-here>
```
Import and configure the client:

```js
import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
```
Run zeke/zistral using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
```js
const output = await replicate.run(
  "zeke/zistral:557744d221767a6cffc91909394f8cf878b0af00094a8846dd48bfe7931f4463",
  {
    input: {
      top_k: 50,
      top_p: 1,
      temperature: 1,
      max_new_tokens: 256
    }
  }
);
console.log(output);
```
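Language models on Replicate typically return their output as an array of string chunks rather than a single string; check this model's schema for the exact shape. Assuming the chunked form, a small helper can flatten it (`joinOutput` is a hypothetical name, not part of the Replicate client):

```javascript
// Hypothetical helper (not part of the Replicate client): join chunked
// model output into one string, passing non-array output through unchanged.
function joinOutput(output) {
  return Array.isArray(output) ? output.join("") : String(output);
}

console.log(joinOutput(["The", " answer", " is", " 42."])); // "The answer is 42."
```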
To learn more, take a look at the guide on getting started with Node.js.
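The `top_k`, `top_p`, and `temperature` inputs control how the model samples tokens. As an illustrative sketch only (this is not Replicate API code, just the usual definitions of these parameters): top-k keeps the k most probable candidates, and top-p (nucleus sampling) then keeps the smallest prefix whose cumulative probability reaches p.

```javascript
// Illustrative sketch of top-k / top-p filtering over a candidate
// probability list; mirrors the standard definitions, nothing more.
function filterCandidates(probs, topK, topP) {
  // Keep the topK most probable candidates...
  const sorted = [...probs].sort((a, b) => b - a).slice(0, topK);
  // ...then the smallest prefix whose cumulative probability reaches topP.
  const kept = [];
  let cumulative = 0;
  for (const p of sorted) {
    kept.push(p);
    cumulative += p;
    if (cumulative >= topP) break;
  }
  return kept;
}

// With top_p = 0.6, only the first two candidates survive here.
console.log(filterCandidates([0.4, 0.25, 0.2, 0.1, 0.05], 50, 0.6));
```

Lower `temperature` and smaller `top_k`/`top_p` make sampling more deterministic; the defaults above (`top_p: 1`, `temperature: 1`) leave the distribution untouched.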
This model runs on Nvidia L40S GPU hardware. We don't yet have enough runs of this model to provide performance information.