Readme
An experimental copy of the Mistral LLM, generated using an experimental Hugging Face builder tool with mistralai/Mistral-7B-Instruct-v0.1 as the base model.
See https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1 for details on the base model.
First, set the REPLICATE_API_TOKEN environment variable. You can find your API token in your account settings:

export REPLICATE_API_TOKEN=<paste-your-token-here>
Run zeke/zistral using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "zeke/zistral:557744d221767a6cffc91909394f8cf878b0af00094a8846dd48bfe7931f4463",
    "input": {
      "top_k": 50,
      "top_p": 1,
      "temperature": 1,
      "max_new_tokens": 256
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
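You can also run the model from Python using Replicate's official client library. The sketch below mirrors the sampling parameters from the curl example; the `prompt` field is an assumption (Mistral-based Replicate models typically accept one), so check this model's schema for the exact input names before relying on it.

```python
MODEL = "zeke/zistral:557744d221767a6cffc91909394f8cf878b0af00094a8846dd48bfe7931f4463"

# Sampling parameters mirrored from the curl example above.
# "prompt" is an assumed input name -- confirm it against the model's schema.
input_params = {
    "prompt": "Write a haiku about GPUs.",
    "top_k": 50,
    "top_p": 1,
    "temperature": 1,
    "max_new_tokens": 256,
}

if __name__ == "__main__":
    import replicate  # pip install replicate; reads REPLICATE_API_TOKEN from the environment

    # replicate.run waits for the prediction to finish and returns its output
    output = replicate.run(MODEL, input=input_params)
    print("".join(output))
```

The `if __name__ == "__main__":` guard keeps the network call out of imports, so the parameters can be reused or inspected without a token.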
This model runs on Nvidia L40S GPU hardware. We don't yet have enough runs of this model to provide performance information.