lucataco/dolphin-2.2.1-mistral-7b:0521a009
Input schema
The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used.
Field | Type | Default value | Description |
---|---|---|---|
prompt | string | None | The input prompt. |
max_new_tokens | integer | 512 | The maximum number of tokens the model should generate as output. |
temperature | number | 0.8 | The value used to modulate the next-token probabilities. |
top_p | number | 0.95 | A probability threshold for generating the output. If < 1.0, only keep the top tokens with cumulative probability >= top_p (nucleus filtering). Nucleus filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751). |
top_k | integer | 50 | The number of highest-probability tokens to consider for generating the output. If > 0, only keep the top k tokens with the highest probability (top-k filtering). |
presence_penalty | number | 0 | Presence penalty: penalizes tokens that have already appeared in the output, regardless of how often. |
frequency_penalty | number | 0 | Frequency penalty: penalizes tokens in proportion to how often they have appeared in the output. |
prompt_template | string | ChatML template (shown below) | The template used to format the prompt. The input prompt is inserted into the template using the `{prompt}` placeholder. |

The default `prompt_template` is:

<|im_start|>system
You are Dolphin, a helpful AI assistant.
<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
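To make the placeholder behavior concrete, here is a minimal sketch of the substitution in Python. The example prompt string is illustrative; this mirrors what the model does server-side, so you only need it if you override `prompt_template`.

```python
# A sketch of how the default ChatML template expands: the model inserts
# your input at the `{prompt}` placeholder before generation.
template = (
    "<|im_start|>system\n"
    "You are Dolphin, a helpful AI assistant.\n"
    "<|im_end|>\n"
    "<|im_start|>user\n"
    "{prompt}<|im_end|>\n"
    "<|im_start|>assistant\n"
)
print(template.format(prompt="What is nucleus sampling?"))  # illustrative prompt
```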
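A minimal sketch of running this version with the Replicate Python client (`pip install replicate`, with `REPLICATE_API_TOKEN` set in the environment). The version hash is the short form shown in the page header; the prompt text is illustrative, and the parameter values are simply the documented defaults spelled out.

```python
import replicate

# All input fields besides `prompt` are optional; the values below are
# the defaults from the input schema, written out for illustration.
output = replicate.run(
    "lucataco/dolphin-2.2.1-mistral-7b:0521a009",  # short version hash as shown above
    input={
        "prompt": "Explain nucleus sampling in one paragraph.",  # illustrative
        "max_new_tokens": 512,
        "temperature": 0.8,
        "top_p": 0.95,
        "top_k": 50,
        "presence_penalty": 0,
        "frequency_penalty": 0,
    },
)

# The model returns an array of string chunks (see the output schema below);
# join them to get the full completion.
print("".join(output))
```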
Output schema
The shape of the response you’ll get when you run this model with an API.
Schema
{
  "items": {"type": "string"},
  "title": "Output",
  "type": "array",
  "x-cog-array-display": "concatenate",
  "x-cog-array-type": "iterator"
}
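Because `x-cog-array-type` is `iterator` and `x-cog-array-display` is `concatenate`, the response arrives as string chunks meant to be joined in order. Whether the Python client hands you a completed list or a generator can vary by client version; iterating and concatenating works either way. A sketch, with an illustrative prompt:

```python
import replicate

# Iterate over the output chunks and print them as they arrive
# (or all at once, if the client returns a completed list).
for chunk in replicate.run(
    "lucataco/dolphin-2.2.1-mistral-7b:0521a009",
    input={"prompt": "Write a haiku about dolphins."},  # illustrative prompt
):
    print(chunk, end="", flush=True)
print()
```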