
lucataco/sandbox:15d15b2a

Input schema

The fields you can use to run this model with an API. If you don’t give a value for a field, its default value will be used.

prompt (string)
Prompt to send to the model.

min_new_tokens (integer, default: 256)
The minimum number of tokens the model should generate as output. A word is generally 2-3 tokens.

max_new_tokens (integer, default: 163840)
The maximum number of tokens the model should generate as output. A word is generally 2-3 tokens.

temperature (number, default: 0.3)
The value used to modulate the next-token probabilities. Adjusts the randomness of outputs: greater than 1 is more random, 0 is deterministic, and 0.75 is a good starting value.

top_p (number, default: 0.9)
A probability threshold for generating the output. If < 1.0, only keep the top tokens with cumulative probability >= top_p (nucleus filtering). Nucleus filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751). Lower this value to ignore less likely tokens.

top_k (integer, default: 50)
The number of highest-probability tokens to consider for generating the output. If > 0, only keep the top k tokens with the highest probability (top-k filtering). Lower this value to ignore less likely tokens.

presence_penalty (number, default: 1.15)
A parameter that penalizes repeated tokens regardless of how many times they have appeared. As the value increases, the model becomes less likely to repeat tokens in the output.

frequency_penalty (number, default: 0.2)
Similar to the presence penalty, but while the presence penalty applies equally to all tokens that have been sampled at least once, the frequency penalty is proportional to how often a particular token has already been sampled.

prompt_template (string)
Prompt template. The string `{prompt}` will be substituted for the input prompt. If you want to generate dialog output, use this template as a starting point, construct the prompt string manually, and set `prompt_template` to `{prompt}` (see the sketch after this list). The default template is:

<|im_start|>system
You're a helpful assistant<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant

stop_sequences (string, default: <|im_end|>)
A comma-separated list of sequences to stop generation at. For example, '<end>,<stop>' will stop generation at the first instance of '<end>' or '<stop>'.
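
As a concrete illustration, here is a minimal sketch of calling this version with the `replicate` Python client. It assumes the client is installed (`pip install replicate`) and that `REPLICATE_API_TOKEN` is set in your environment; the prompts and parameter values are hypothetical examples, not tuned recommendations. The second call shows the manual dialog construction described under `prompt_template`: build the ChatML string yourself and pass `{prompt}` as the template.

```python
import replicate  # pip install replicate; needs REPLICATE_API_TOKEN set

# Basic call: the default prompt_template wraps the prompt for us.
output = replicate.run(
    "lucataco/sandbox:15d15b2a",
    input={
        "prompt": "Write a haiku about sandboxes.",
        "max_new_tokens": 512,
        "temperature": 0.3,
        "top_p": 0.9,
        "top_k": 50,
        "presence_penalty": 1.15,
        "frequency_penalty": 0.2,
        "stop_sequences": "<|im_end|>",  # comma-separated if more than one
    },
)

# Dialog-style call: build the ChatML string manually and make the
# template a pass-through by setting prompt_template to just "{prompt}".
dialog = (
    "<|im_start|>system\nYou're a helpful assistant<|im_end|>\n"
    "<|im_start|>user\nWhat is nucleus sampling?<|im_end|>\n"
    "<|im_start|>assistant"
)
dialog_output = replicate.run(
    "lucataco/sandbox:15d15b2a",
    input={"prompt": dialog, "prompt_template": "{prompt}"},
)
```

Note that the default `stop_sequences` value, `<|im_end|>`, matches the ChatML end-of-turn token, so generation halts when the assistant finishes its turn.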

Output schema

The shape of the response you’ll get when you run this model with an API.

Schema
{
  "items": {
    "type": "string"
  },
  "title": "Output",
  "type": "array",
  "x-cog-array-display": "concatenate",
  "x-cog-array-type": "iterator"
}
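
Because the output is declared as an iterator of strings meant to be concatenated (`x-cog-array-type: iterator`, `x-cog-array-display: concatenate`), the usual pattern is to iterate over the fragments or join them into one string. A minimal sketch, under the same client assumptions as above:

```python
import replicate

output = replicate.run(
    "lucataco/sandbox:15d15b2a",
    input={"prompt": "Explain top-k filtering in one paragraph."},
)

# Depending on the client version, `output` is a generator that streams
# fragments or a list that arrives complete; both support iteration.
for fragment in output:
    print(fragment, end="")

# Or, to collect the whole response as one string instead of printing:
# text = "".join(output)
```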