hamelsmu/llama-2-13b-chat-hf

Input

Install Replicate's Python client library:
pip install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>

Find your API token in your account settings.
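
Alternatively, the token can be supplied to the client directly instead of through the environment variable. A minimal sketch, assuming the replicate.Client(api_token=...) constructor offered by the Python client (the token value below is a placeholder, not a real credential):

import replicate

# Construct a client with an explicit API token rather than relying on
# the REPLICATE_API_TOKEN environment variable.
client = replicate.Client(api_token="<paste-your-token-here>")

# client.run(...) can then be used the same way as replicate.run(...) below.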

Import the client:
import replicate

Run hamelsmu/llama-2-13b-chat-hf using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.

output = replicate.run(
    "hamelsmu/llama-2-13b-chat-hf:6aef4f04938605319fb2039146b06e07047aeb9afc0f75b7e011aaf666cc3dd6",
    input={}
)

print(output)
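
The call above passes an empty input dict. This page doesn't reproduce the model's schema, but Llama 2 chat deployments typically accept a text prompt plus sampling parameters, and language models on Replicate usually stream output as an iterator of text chunks. A sketch under those assumptions (the prompt, max_new_tokens, and temperature keys are illustrative, not confirmed by this model's schema):

import replicate

# Assumed input keys for a Llama 2 chat model; check the model's schema
# for the actual accepted parameters and their defaults.
output = replicate.run(
    "hamelsmu/llama-2-13b-chat-hf:6aef4f04938605319fb2039146b06e07047aeb9afc0f75b7e011aaf666cc3dd6",
    input={
        "prompt": "Explain what a llama is in one sentence.",
        "max_new_tokens": 128,
        "temperature": 0.7,
    },
)

# If the output is streamed as chunks of text, join them into one string.
print("".join(output))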

To learn more, take a look at the guide on getting started with Python.

Run time and cost

This model runs on Nvidia L40S GPU hardware. We don't yet have enough runs of this model to provide performance information.

Readme

This model doesn't have a readme.