hamelsmu/test
Public · 0 runs
Run hamelsmu/test with an API
Use one of our client libraries to get started quickly. Clicking on a library will take you to the Playground tab where you can tweak different inputs, see the results, and copy the corresponding code to use in your own project.
Input schema
The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used.
| Field | Type | Default value | Description |
|---|---|---|---|
| prompt | string | None | |
| max_new_tokens | integer | 512 | The maximum number of tokens the model should generate as output. |
| temperature | number | 0.7 | The value used to modulate the next token probabilities. |
| do_sample | boolean | True | Whether or not to use sampling; otherwise use greedy decoding. |
| top_p | number | 0.95 | A probability threshold for generating the output. If < 1.0, only keep the top tokens with cumulative probability >= top_p (nucleus filtering). Nucleus filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751). |
| top_k | integer | 50 | The number of highest probability tokens to consider for generating the output. If > 0, only keep the top k tokens with highest probability (top-k filtering). |
| prompt_template | string | `### System:\nBelow is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\nTranslate the input from English to Hinglish\n\n### Input:\n{prompt}\n\n### Response:\n` | The template used to format the prompt before passing it to the model. For no template, you can set this to `{prompt}`. |
Schema
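The defaults above can be applied client-side before sending a request. The sketch below merges user-supplied fields with the schema defaults and enforces the one required field; the field names and default values come from the table, but `build_input` is a hypothetical helper, and the actual API call (e.g. via the Replicate client) is left as a comment since it needs an API token.

```python
# Sketch: build an input payload for hamelsmu/test using the schema defaults.
# Defaults and the required-field list are copied from the input schema above.

DEFAULTS = {
    "max_new_tokens": 512,
    "temperature": 0.7,
    "do_sample": True,
    "top_p": 0.95,
    "top_k": 50,
}
REQUIRED = ["prompt"]


def build_input(**fields):
    """Merge caller fields over the schema defaults; `prompt` is required."""
    missing = [name for name in REQUIRED if name not in fields]
    if missing:
        raise ValueError(f"missing required field(s): {missing}")
    return {**DEFAULTS, **fields}


payload = build_input(prompt="How are you?", temperature=0.5)
# `payload` would then be passed as the `input` to the API, e.g.
# replicate.run("hamelsmu/test", input=payload)
```

Unspecified fields fall back to their documented defaults, so the payload above keeps `top_k=50` while overriding `temperature`.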
{
"type": "object",
"title": "Input",
"required": [
"prompt"
],
"properties": {
"top_k": {
"type": "integer",
"title": "Top K",
"default": 50,
"x-order": 5,
"description": "The number of highest probability tokens to consider for generating the output. If > 0, only keep the top k tokens with highest probability (top-k filtering)."
},
"top_p": {
"type": "number",
"title": "Top P",
"default": 0.95,
"x-order": 4,
"description": "A probability threshold for generating the output. If < 1.0, only keep the top tokens with cumulative probability >= top_p (nucleus filtering). Nucleus filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751)."
},
"prompt": {
"type": "string",
"title": "Prompt",
"x-order": 0
},
"do_sample": {
"type": "boolean",
"title": "Do Sample",
"default": true,
"x-order": 3,
"description": "Whether or not to use sampling; otherwise use greedy decoding."
},
"temperature": {
"type": "number",
"title": "Temperature",
"default": 0.7,
"x-order": 2,
"description": "The value used to modulate the next token probabilities."
},
"max_new_tokens": {
"type": "integer",
"title": "Max New Tokens",
"default": 512,
"x-order": 1,
"description": "The maximum number of tokens the model should generate as output."
},
"prompt_template": {
"type": "string",
"title": "Prompt Template",
"default": "### System:\nBelow is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\nTranslate the input from English to Hinglish\n\n### Input:\n{prompt}\n\n### Response:\n ",
"x-order": 6,
"description": "The template used to format the prompt before passing it to the model. For no template, you can set this to `{prompt}`."
}
}
}
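The `prompt_template` default in the schema contains a `{prompt}` placeholder. A minimal sketch of how such a template is filled in, assuming plain `str.format`-style substitution (the template string is copied from the schema default; the substitution mechanism is an assumption, not confirmed by the model's source):

```python
# Sketch: filling the default prompt_template with a user prompt.
# Template text copied from the schema default above; str.format substitution
# is an assumption about how the model applies it.

template = (
    "### System:\n"
    "Below is an instruction that describes a task. Write a response that "
    "appropriately completes the request.\n\n"
    "### Instruction:\n"
    "Translate the input from English to Hinglish\n\n"
    "### Input:\n"
    "{prompt}\n\n"
    "### Response:\n"
)

formatted = template.format(prompt="What time is it?")
```

Setting `prompt_template` to `{prompt}` makes `formatted` equal to the raw prompt, which is the "no template" case the description mentions.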
Output schema
The shape of the response you’ll get when you run this model with an API.
Schema
{
"type": "array",
"items": {
"type": "string"
},
"title": "Output",
"x-cog-array-type": "iterator",
"x-cog-array-display": "concatenate"
}
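Because the output is an iterator of string chunks with `x-cog-array-display: concatenate`, a client typically joins the streamed pieces into one string. A sketch, with a stand-in generator in place of the real API iterator:

```python
# Sketch: consuming the model's output, which is an iterator of string
# chunks meant to be concatenated. `fake_stream` is a stand-in for the
# iterator an API client would return.

def fake_stream():
    yield "Namaste, "
    yield "aap kaise "
    yield "hain?"

text = "".join(fake_stream())
# text == "Namaste, aap kaise hain?"
```

Joining as chunks arrive (rather than after the stream ends) is how clients render partial output while the model is still generating.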