deniyes/dolly-v2-12b-demo:ef548bcb
Input schema
The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used.
Field | Type | Default value | Description
---|---|---|---
prompt | string | | Input prompt.
max_length | integer | 500 (min: 1) | Maximum number of tokens to generate. A word is generally 2-3 tokens.
decoding | string (enum) | top_p (options: top_p, top_k) | Choose a decoding method.
top_k | integer | 50 | Valid if you choose top_k decoding. The number of highest-probability vocabulary tokens to keep for top-k filtering.
top_p | number | 1 (min: 0.01, max: 1) | Valid if you choose top_p decoding. When decoding text, samples from the top p percentage of most likely tokens; lower this to ignore less likely tokens.
temperature | number | 0.75 (min: 0.01, max: 5) | Adjusts randomness of outputs; values greater than 1 are more random and values near 0 are nearly deterministic. 0.75 is a good starting value.
repetition_penalty | number | 1.2 (min: 0.01, max: 5) | Penalty for repeated words in generated text; 1 is no penalty, values greater than 1 discourage repetition, and values less than 1 encourage it.
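As a rough illustration of how these fields fit together, here is a minimal sketch using the official `replicate` Python client, assuming a `REPLICATE_API_TOKEN` is set in the environment. The version hash is abbreviated above as `ef548bcb`; the full hash from the model page is required in practice, and the prompt and sampling values below are illustrative placeholders, not recommendations.

```python
import replicate

# Sketch only: "ef548bcb" is the abbreviated version shown above; substitute
# the full version hash from the model page.
output = replicate.run(
    "deniyes/dolly-v2-12b-demo:ef548bcb",
    input={
        "prompt": "Explain what instruction tuning is in one paragraph.",
        "max_length": 500,         # tokens, not words
        "decoding": "top_p",       # or "top_k"
        "top_p": 0.9,              # used only when decoding == "top_p"
        "temperature": 0.75,
        "repetition_penalty": 1.2,
    },
)
```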
Output schema
The shape of the response you’ll get when you run this model with an API.
Schema
{
  "items": {
    "type": "string"
  },
  "title": "Output",
  "type": "array",
  "x-cog-array-display": "concatenate",
  "x-cog-array-type": "iterator"
}
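Since the output schema declares an iterator of strings with `x-cog-array-display: concatenate`, the pieces are meant to be joined into a single text. Continuing the sketch above, where `output` is the hypothetical return value of `replicate.run`:

```python
# The model streams the completion as string chunks; concatenating them
# reconstructs the full generated text.
text = "".join(output)
print(text)
```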