hazelnutcloud/solar-10.7b-instruct-uncensored (Public, 4.6K runs)
Run hazelnutcloud/solar-10.7b-instruct-uncensored with an API
Use one of our client libraries to get started quickly. Clicking on a library will take you to the Playground tab where you can tweak different inputs, see the results, and copy the corresponding code to use in your own project.
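As a minimal sketch of what the generated client code looks like, the snippet below builds a prompt in the template this model's default prompt uses (`<s> ### User:\n...\n\n### Assistant:\n`). The `build_prompt` helper is an illustrative convenience, not part of any client library; the commented `replicate.run` call assumes the official Python client is installed and `REPLICATE_API_TOKEN` is set.

```python
def build_prompt(question: str) -> str:
    # Mirror the model's default prompt template:
    # "<s> ### User:\n{question}\n\n### Assistant:\n"
    return f"<s> ### User:\n{question}\n\n### Assistant:\n"

prompt = build_prompt("What's the largest planet in the solar system?")

# With the Replicate Python client (assumption: installed and authenticated):
# import replicate
# output = replicate.run(
#     "hazelnutcloud/solar-10.7b-instruct-uncensored",
#     input={"prompt": prompt, "max_tokens": 128},
# )
```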
Input schema
The fields you can use to run this model with an API. If you don't give a value for a field its default value will be used.
| Field | Type | Default value | Description |
|---|---|---|---|
| prompt | string | `<s> ### User:\nWhat's the largest planet in the solar system?\n\n### Assistant:\n` | The prompt to generate text from. |
| max_tokens | integer | 16 | The maximum number of tokens to generate. If max_tokens <= 0 or None, the maximum number of tokens to generate is unlimited and depends on n_ctx. |
| temperature | number | 0.8 | The temperature to use for sampling. |
| top_p | number | 0.95 | The nucleus sampling probability. |
| min_p | number | 0.05 | The minimum probability to keep when using nucleus sampling. |
| typical_p | number | 1 | The typical probability to keep when using nucleus sampling. |
| frequency_penalty | number | 0 | The frequency penalty to use. |
| presence_penalty | number | 0 | The presence penalty to use. |
| repeat_penalty | number | 1.1 | The repeat penalty to use. |
| top_k | integer | 40 | The number of highest-probability vocabulary tokens to keep for top-k sampling. |
| stop | string | `"\n"` | The stop sequence to use. |
{
"type": "object",
"title": "Input",
"properties": {
"stop": {
"type": "string",
"title": "Stop",
"default": "\n",
"x-order": 10,
"description": "The stop sequence to use."
},
"min_p": {
"type": "number",
"title": "Min P",
"default": 0.05,
"x-order": 4,
"description": "The minimum probability to keep when using nucleus sampling."
},
"top_k": {
"type": "integer",
"title": "Top K",
"default": 40,
"x-order": 9,
"description": "The number of highest probability vocabulary tokens to keep for top-k sampling."
},
"top_p": {
"type": "number",
"title": "Top P",
"default": 0.95,
"x-order": 3,
"description": "The nucleus sampling probability."
},
"prompt": {
"type": "string",
"title": "Prompt",
"default": "<s> ### User:\nWhat's the largest planet in the solar system?\n\n### Assistant:\n",
"x-order": 0,
"description": "The prompt to generate text from."
},
"typical_p": {
"type": "number",
"title": "Typical P",
"default": 1,
"x-order": 5,
"description": "The typical probability to keep when using nucleus sampling."
},
"max_tokens": {
"type": "integer",
"title": "Max Tokens",
"default": 16,
"x-order": 1,
"description": "The maximum number of tokens to generate. If max_tokens <= 0 or None, the maximum number of tokens to generate is unlimited and depends on n_ctx."
},
"temperature": {
"type": "number",
"title": "Temperature",
"default": 0.8,
"x-order": 2,
"description": "The temperature to use for sampling."
},
"repeat_penalty": {
"type": "number",
"title": "Repeat Penalty",
"default": 1.1,
"x-order": 8,
"description": "The repeat penalty to use."
},
"presence_penalty": {
"type": "number",
"title": "Presence Penalty",
"default": 0,
"x-order": 7,
"description": "The presence penalty to use."
},
"frequency_penalty": {
"type": "number",
"title": "Frequency Penalty",
"default": 0,
"x-order": 6,
"description": "The frequency penalty to use."
}
}
}
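Since every field falls back to its default when omitted, a request payload can be built by merging overrides into the schema's defaults. This is a hypothetical helper for illustration (the `DEFAULTS` dict is transcribed from the schema above; `build_input` is not part of any client library):

```python
# Defaults transcribed from the input schema above.
DEFAULTS = {
    "prompt": "<s> ### User:\nWhat's the largest planet in the solar system?\n\n### Assistant:\n",
    "max_tokens": 16,
    "temperature": 0.8,
    "top_p": 0.95,
    "min_p": 0.05,
    "typical_p": 1,
    "frequency_penalty": 0,
    "presence_penalty": 0,
    "repeat_penalty": 1.1,
    "top_k": 40,
    "stop": "\n",
}

def build_input(**overrides):
    """Merge user overrides into the schema defaults, rejecting unknown fields."""
    unknown = set(overrides) - set(DEFAULTS)
    if unknown:
        raise ValueError(f"unknown input fields: {sorted(unknown)}")
    return {**DEFAULTS, **overrides}
```

For example, `build_input(max_tokens=256)` yields a payload where `max_tokens` is 256 and every other field keeps its default.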
Output schema
The shape of the response you’ll get when you run this model with an API.
Schema
{
"type": "array",
"items": {
"type": "string"
},
"title": "Output",
"x-cog-array-type": "iterator"
}
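Because the output is declared as an iterator of strings (`"x-cog-array-type": "iterator"`), the model streams the completion in chunks; the full text is recovered by concatenating them. A minimal sketch, using a plain Python iterable to stand in for the streamed response:

```python
def collect_output(chunks):
    """Concatenate streamed string chunks into the full completion text."""
    # The output schema is an array of strings produced as an iterator,
    # so each element is a fragment of the generated text, in order.
    return "".join(chunks)
```

In practice you would pass the iterator returned by the API call (or loop over it to display tokens as they arrive) instead of a list literal.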