hazelnutcloud/solar-10.7b-instruct-uncensored:75d2421f
Input schema
The fields you can use to run this model with an API. If you don't give a value for a field, its default value is used.
Field | Type | Default value | Description |
---|---|---|---|
prompt | string | `<s> ### User:\nWhat's the largest planet in the solar system?\n### Assistant:` | The prompt to generate text from. |
max_tokens | integer | 16 | The maximum number of tokens to generate. If max_tokens <= 0 or None, generation is unbounded and limited only by the context window (n_ctx). |
temperature | number | 0.8 | The temperature to use for sampling. |
top_p | number | 0.95 | The nucleus (top-p) sampling probability. |
min_p | number | 0.05 | The minimum probability threshold for min-p sampling. |
typical_p | number | 1 | The typical-p value for locally typical sampling. |
frequency_penalty | number | 0 | The frequency penalty to use. |
presence_penalty | number | 0 | The presence penalty to use. |
repeat_penalty | number | 1.1 | The repeat penalty to use. |
top_k | integer | 40 | The number of highest-probability vocabulary tokens to keep for top-k sampling. |
stop | string | | The stop sequence to use. |
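As a usage sketch, the call below passes these fields through the Replicate Python client (`pip install replicate`, with `REPLICATE_API_TOKEN` set in the environment). The model identifier comes from this page; the parameter values are illustrative overrides of the defaults above, not recommendations.

```python
import replicate

# Illustrative example call; overrides a few of the schema defaults.
output = replicate.run(
    "hazelnutcloud/solar-10.7b-instruct-uncensored:75d2421f",
    input={
        "prompt": (
            "<s> ### User:\n"
            "What's the largest planet in the solar system?\n"
            "### Assistant:\n"
        ),
        "max_tokens": 128,     # the default of 16 is usually too short for a full answer
        "temperature": 0.8,
        "top_p": 0.95,
        "top_k": 40,
        "repeat_penalty": 1.1,
        "stop": "### User:",   # stop when the model starts a new turn
    },
)

# The output is an array of string chunks (see the output schema below).
print("".join(output))
```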
Output schema
The shape of the response you’ll get when you run this model with an API.
Schema
{
  "items": {"type": "string"},
  "title": "Output",
  "type": "array",
  "x-cog-array-display": "concatenate",
  "x-cog-array-type": "iterator"
}
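Because the output is declared as an iterator of strings meant to be concatenated (`x-cog-array-type: iterator`, `x-cog-array-display: concatenate`), a client can either join the completed array, as above, or print chunks as they arrive. A minimal sketch, again assuming the Replicate Python client; depending on the client version, `replicate.run` may return a fully materialized list or a lazily yielding iterator.

```python
import replicate

# Stream-style consumption: print each string chunk as it is produced and
# rely on simple concatenation to reassemble the full completion.
for chunk in replicate.run(
    "hazelnutcloud/solar-10.7b-instruct-uncensored:75d2421f",
    input={
        "prompt": "<s> ### User:\nName the largest planet.\n### Assistant:\n",
        "max_tokens": 64,
    },
):
    print(chunk, end="", flush=True)
print()
```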