You're looking at a specific version of this model.
camenduru/mixtral-8x22b-v0.1-instruct-oh:e773421c
Input schema
The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used.
Field | Type | Default value | Description |
---|---|---|---|
prompt | string | Tell me a story about the Cheesecake Kingdom. | None |
max_tokens | integer | 256 | Maximum number of tokens to generate per output sequence. |
min_tokens | integer | 1 | Minimum number of tokens to generate per output sequence. |
presence_penalty | number | 0 | Float that penalizes new tokens based on whether they appear in the generated text so far. Values > 0 encourage the model to use new tokens, while values < 0 encourage the model to repeat tokens. |
frequency_penalty | number | 0 | Float that penalizes new tokens based on their frequency in the generated text so far. Values > 0 encourage the model to use new tokens, while values < 0 encourage the model to repeat tokens. |
repetition_penalty | number | 2 | Float that penalizes new tokens based on whether they appear in the prompt and the generated text so far. Values > 1 encourage the model to use new tokens, while values < 1 encourage the model to repeat tokens. |
length_penalty | number | 1 | Float that penalizes sequences based on their length. Used in beam search. |
temperature | number | 0.6 | Float that controls the randomness of the sampling. Lower values make the model more deterministic, while higher values make the model more random. Zero means greedy sampling. |
top_p | number | 1 | Float that controls the cumulative probability of the top tokens to consider. Must be in (0, 1]. Set to 1 to consider all tokens. |
top_k | integer | 40 | Integer that controls the number of top tokens to consider. Set to -1 to consider all tokens. |
min_p | number | 0 | Float that represents the minimum probability for a token to be considered, relative to the probability of the most likely token. Must be in [0, 1]. Set to 0 to disable this. |
ignore_eos | boolean | False | Whether to ignore the EOS token and continue generating tokens after the EOS token is generated. |
system_prompt | string | A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. | None |
template | string | {system_prompt} {prompt} | SYSTEM:{system_prompt} USER:{prompt} |
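As a sketch of how these fields fit together, here is a hypothetical call using Replicate's Python client with a few of the defaults overridden. The field names and values come from the schema above; the client usage, the API token setup, and the abbreviated version ID are assumptions, so substitute the full version hash from this page before running.

```python
import replicate  # assumes REPLICATE_API_TOKEN is set in the environment

# Hypothetical sketch: the version ID below is abbreviated, as on this page;
# replace it with the full hash. Any field omitted from `input` falls back
# to the default listed in the table.
output = replicate.run(
    "camenduru/mixtral-8x22b-v0.1-instruct-oh:e773421c",
    input={
        "prompt": "Tell me a story about the Cheesecake Kingdom.",
        "max_tokens": 256,        # cap on generated tokens
        "temperature": 0.6,       # 0 would mean greedy sampling
        "top_p": 1,               # must be in (0, 1]; 1 considers all tokens
        "top_k": 40,              # -1 would consider all tokens
        "repetition_penalty": 2,  # values > 1 discourage repeated tokens
    },
)
print(output)
```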
Output schema
The shape of the response you’ll get when you run this model with an API.
Schema
{"title": "Output", "type": "string"}
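Because the output type is a plain string rather than a list of tokens, the value returned by the sketch above can be used directly; a quick sanity check under the same assumptions:

```python
# The output schema declares a single string, so no joining of
# streamed chunks should be needed here.
assert isinstance(output, str)
print(output[:200])  # preview the first 200 characters
```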