Input schema
The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used.
Field | Type | Default value | Description |
---|---|---|---|
`prompt` | string | | Prompt to send to the model. |
`system_prompt` | string | `You are a helpful assistant` | System prompt to send to the model. This is prepended to the prompt and helps guide system behavior. |
`max_tokens` | integer | 512 (min: 1) | Maximum number of tokens to generate. A word is generally 2-3 tokens. |
`min_tokens` | integer | (min: -1) | Minimum number of tokens to generate. To disable, set to -1. A word is generally 2-3 tokens. |
`temperature` | number | 0.7 (max: 5) | Adjusts randomness of outputs; greater than 1 is random and 0 is deterministic. 0.75 is a good starting value. |
`top_p` | number | 0.95 (max: 1) | When decoding text, samples from the top p percentage of most likely tokens; lower to ignore less likely tokens. |
`top_k` | integer | 0 (min: -1) | When decoding text, samples from the top k most likely tokens; lower to ignore less likely tokens. |
`stop_sequences` | string | `<\|end_of_text\|>,<\|eot_id\|>` | A comma-separated list of sequences to stop generation at. For example, `<end>,<stop>` will stop generation at the first instance of `<end>` or `<stop>`. |
`length_penalty` | number | 1 (max: 5) | Controls how long the outputs are. If < 1, the model tends to generate shorter outputs; if > 1, it tends to generate longer outputs. |
`presence_penalty` | number | 0 | Penalizes repeated tokens regardless of the number of appearances. As the value increases, the model becomes less likely to repeat tokens in the output. |
`seed` | integer | | Random seed. Leave blank to randomize the seed. |
`prompt_template` | string | `<\|begin_of_text\|><\|start_header_id\|>system<\|end_header_id\|>\n{system_prompt}<\|eot_id\|><\|start_header_id\|>user<\|end_header_id\|>\n{prompt}<\|eot_id\|><\|start_header_id\|>assistant<\|end_header_id\|>` | Template for formatting the prompt. Can be an arbitrary string, but must contain the substring `{prompt}`. |
`log_performance_metrics` | boolean | False | |
`max_new_tokens` | integer | (min: 1) | This parameter has been renamed to `max_tokens` and exists only for backwards compatibility. We recommend you use `max_tokens` instead; the two may not both be specified. |
`min_new_tokens` | integer | (min: -1) | This parameter has been renamed to `min_tokens` and exists only for backwards compatibility. We recommend you use `min_tokens` instead; the two may not both be specified. |
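As a concrete illustration, here is a minimal sketch of running this version with the Replicate Python client. The `owner/model-name:version-id` identifier is a placeholder, since this page does not name the model, and any field left out of `input` falls back to the defaults in the table above.

```python
import replicate  # pip install replicate; requires REPLICATE_API_TOKEN in the environment

# Placeholder identifier -- substitute the owner/name:version shown on this page.
output = replicate.run(
    "owner/model-name:version-id",
    input={
        "prompt": "Give me three facts about the Atlantic Ocean.",
        "system_prompt": "You are a helpful assistant",
        "max_tokens": 512,
        "temperature": 0.7,
        "top_p": 0.95,
    },
)

# Per the output schema below, the result is a sequence of string chunks,
# so joining them recovers the full completion.
print("".join(output))
```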
Output schema
The shape of the response you’ll get when you run this model with an API.
Schema
```json
{
  "items": {"type": "string"},
  "title": "Output",
  "type": "array",
  "x-cog-array-display": "concatenate",
  "x-cog-array-type": "iterator"
}
```
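The `"x-cog-array-type": "iterator"` and `"x-cog-array-display": "concatenate"` hints mean the output arrives as a stream of string chunks meant to be concatenated in order. A sketch of consuming it incrementally with the Python client, again using a placeholder model identifier:

```python
import replicate

# Because the output is an iterator of strings, chunks can be printed
# as they arrive instead of waiting for the whole array.
for chunk in replicate.run(
    "owner/model-name:version-id",  # placeholder identifier
    input={"prompt": "Write a haiku about the sea."},
):
    print(chunk, end="", flush=True)
print()
```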