Readme
This model doesn't have a readme.
Install Replicate's Python client library:

pip install replicate

Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import the client:

import replicate
Run mozeal/seallm-7b-v2 using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "mozeal/seallm-7b-v2:eed2b75784bab3220aed087368a268ad3b84e79b7d90a71d66f644ac12a8be2e",
    input={
        "debug": False,
        "top_k": 1,
        "top_p": 0.95,
        # Thai: "Tell a bedtime story about a little pig's adventure"
        "prompt": "เล่านิทานก่อนนอนเรื่องลูกหมูน้อยผจญภัย",
        "temperature": 0.75,
        "system_prompt": "You are a helpful assistant.",
        "max_new_tokens": 512,
        "min_new_tokens": -1
    }
)
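The temperature, top_k, and top_p inputs above control how the next token is sampled. As a rough guide, a common way these parameters reshape a logit distribution is sketched below; this is an illustrative stand-alone function, not the model's actual sampler, and its name is hypothetical:

```python
import math

def apply_sampling_params(logits, temperature=0.75, top_k=1, top_p=0.95):
    # Scale logits by temperature (lower temperature sharpens the distribution)
    scaled = [l / temperature for l in logits]
    # Softmax to get probabilities
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # top-k: keep only the k most probable token indices
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:top_k]
    # top-p (nucleus): keep the smallest prefix whose cumulative mass reaches top_p
    kept, mass = [], 0.0
    for i in ranked:
        kept.append(i)
        mass += probs[i]
        if mass >= top_p:
            break
    return kept

# With top_k=1 (as in the request above), only the single most likely token survives
print(apply_sampling_params([2.0, 1.0, 0.5], top_k=1))  # [0]
```

With top_k=1 the sampler is effectively greedy, which makes the temperature and top_p settings largely moot for this particular request.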
# The mozeal/seallm-7b-v2 model can stream output as it's running.
# The predict method returns an iterator, and you can iterate over that output.
for item in output:
    # https://replicate.com/mozeal/seallm-7b-v2/api#output-schema
    print(item, end="")
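If you want the complete generation as a single string rather than printing chunks as they arrive, you can join the iterator. A minimal sketch of the pattern, using a stand-in generator in place of a live prediction (the chunk text is invented for illustration):

```python
def fake_stream():
    # Stand-in for the iterator of text chunks returned by replicate.run(...)
    yield "Once upon a time, "
    yield "a little pig "
    yield "set out on an adventure."

# Joining the chunks yields the complete generated text
full_text = "".join(fake_stream())
print(full_text)
```

Note that the iterator is consumed once: if you both loop over it and join it, the second pass will see nothing.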
To learn more, take a look at the guide on getting started with Python.
This model runs on Nvidia L40S GPU hardware. We don't yet have enough runs of this model to provide performance information.