
technillogue /llama-2-7b-chat-hf-mlc:2169b1f5

Input schema

The fields you can use to run this model with an API. If you don’t give a value for a field, its default value will be used.

prompt (string)
Prompt to send to the model.

system_prompt (string, default: "You are a helpful, respectful and honest assistant.")
System prompt to send to the model. This is prepended to the prompt and helps guide system behavior.

max_new_tokens (integer, default: 128, min: 1)
Maximum number of tokens to generate. A word is generally 2-3 tokens.

min_new_tokens (integer, default: -1, min: -1)
Minimum number of tokens to generate. To disable, set to -1. A word is generally 2-3 tokens.

temperature (number, default: 0.7, min: 0.01, max: 5)
Adjusts randomness of outputs: values greater than 1 are more random, 0 is deterministic, and 0.75 is a good starting value.

top_p (number, default: 0.95, max: 1)
When decoding text, samples from the smallest set of most likely tokens whose cumulative probability reaches top_p; lower it to ignore less likely tokens.

repetition_penalty (number, default: 1.15)
Controls how repetitive the text can be: lower means more repetitive, higher means less repetitive. Set to 1.0 to disable.

stop_sequences (string)
A comma-separated list of sequences to stop generation at. For example, '<end>,<stop>' will stop generation at the first instance of '<end>' or '<stop>'.

seed (integer)
Random seed. Leave blank to randomize the seed.

debug (boolean, default: False)
Provide debugging output in logs.

webrtc_offer (string)
Instead of a single prediction, handle a WebRTC offer as JSON, optionally with an ice_server key of ICE servers to use for connecting.
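
For reference, a minimal sketch of invoking this version with the Replicate Python client (the client library, the pip package name, and the REPLICATE_API_TOKEN environment variable are assumptions, not documented on this page). Only prompt is required; omitted fields fall back to the defaults above.

# A minimal sketch, assuming the Replicate Python client
# (pip install replicate) and REPLICATE_API_TOKEN set in the environment.
import replicate

output = replicate.run(
    "technillogue/llama-2-7b-chat-hf-mlc:2169b1f5",
    input={
        "prompt": "Explain what a token is in one sentence.",
        "max_new_tokens": 64,   # cap the response length
        "temperature": 0.7,     # the documented default, shown for clarity
        "top_p": 0.95,
    },
)

# Per the output schema below, the model produces an array of strings
# backed by an iterator, so replicate.run yields text chunks as they
# are generated.
for chunk in output:
    print(chunk, end="")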

Output schema

The shape of the response you’ll get when you run this model with an API.

Schema
{
  "items": {"type": "string"},
  "title": "Output",
  "type": "array",
  "x-cog-array-display": "concatenate",
  "x-cog-array-type": "iterator"
}
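
The x-cog-array-type: iterator and x-cog-array-display: concatenate extensions mean the output arrives as an ordered stream of string chunks that clients are expected to join into a single completion. A standalone sketch (the chunk values here are hypothetical):

# Hypothetical chunks, standing in for a finished prediction's output
# array; real chunk boundaries depend on how the model tokenizes text.
chunks = ["Llama ", "2 ", "is ", "a ", "chat ", "model."]

# "x-cog-array-display": "concatenate" -> join the items in order.
completion = "".join(chunks)
print(completion)  # Llama 2 is a chat model.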