
tomasmcm /mistral-7b-instruct-v0.2:366548f0

Input

string (required)

Text prompt to send to the model.

integer

Maximum number of tokens to generate per output sequence.

Default: 128

number
(minimum: -5, maximum: 5)

Float that penalizes new tokens based on whether they appear in the generated text so far. Values > 0 encourage the model to use new tokens, while values < 0 encourage the model to repeat tokens.

Default: 0

number
(minimum: -5, maximum: 5)

Float that penalizes new tokens based on their frequency in the generated text so far. Values > 0 encourage the model to use new tokens, while values < 0 encourage the model to repeat tokens.

Default: 0
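The two penalties above can be sketched as a post-hoc adjustment to the token logits, in the style of vLLM's sampling code. This is a simplified illustration with a toy vocabulary, not this model's exact implementation:

```python
from collections import Counter

def apply_penalties(logits, generated_tokens,
                    presence_penalty=0.0, frequency_penalty=0.0):
    """Penalize tokens that already appear in the generated text.

    logits: dict mapping token -> raw score.
    Positive penalties discourage repetition; negative ones encourage it.
    """
    counts = Counter(generated_tokens)
    adjusted = dict(logits)
    for token, count in counts.items():
        if token in adjusted:
            # Presence penalty: flat cost for any token seen at least once.
            adjusted[token] -= presence_penalty
            # Frequency penalty: cost scales with how often it appeared.
            adjusted[token] -= frequency_penalty * count
    return adjusted

logits = {"the": 2.0, "cat": 1.5, "dog": 1.0}
out = apply_penalties(logits, ["the", "the", "cat"],
                      presence_penalty=0.5, frequency_penalty=0.2)
# "the" (seen twice): 2.0 - 0.5 - 0.2*2 = 1.1
# "cat" (seen once):  1.5 - 0.5 - 0.2   = 0.8
# "dog" (unseen):     unchanged at 1.0
```

With both penalties set to 0 (the default), the logits pass through unchanged.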

number
(minimum: 0.01, maximum: 5)

Float that controls the randomness of the sampling. Lower values make the model more deterministic, while higher values make the model more random. Zero means greedy sampling.

Default: 0.8
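Temperature rescales the logits before they are turned into probabilities. A minimal sketch of the standard formulation (toy logits, not this model's internals):

```python
import math

def softmax_with_temperature(logits, temperature=0.8):
    """Convert logits to probabilities, scaled by temperature.

    Lower temperature sharpens the distribution (more deterministic);
    higher temperature flattens it (more random).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
cold = softmax_with_temperature(logits, temperature=0.1)  # near-greedy
hot = softmax_with_temperature(logits, temperature=5.0)   # near-uniform
```

At temperature 0.1 almost all probability mass lands on the top token, while at 5.0 the three options become nearly interchangeable.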

number
(minimum: 0.01, maximum: 1)

Float that controls the cumulative probability of the top tokens to consider. Must be in (0, 1]. Set to 1 to consider all tokens.

Default: 0.95

integer

Integer that controls the number of top tokens to consider. Set to -1 to consider all tokens.

Default: -1
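Top-p (nucleus) and top-k filtering can be combined as below. This is a hand-rolled sketch over a toy probability table, assuming probabilities that already sum to 1:

```python
def filter_top_k_top_p(probs, top_k=-1, top_p=1.0):
    """Return the tokens allowed by both filters.

    probs: dict mapping token -> probability.
    top_k=-1 and top_p=1.0 disable the respective filter,
    matching the defaults listed above.
    """
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    if top_k > 0:
        ranked = ranked[:top_k]  # keep only the k most likely tokens
    kept, cumulative = [], 0.0
    for token, p in ranked:
        kept.append(token)
        cumulative += p
        if cumulative >= top_p:  # stop once the nucleus mass is covered
            break
    return kept

probs = {"a": 0.5, "b": 0.3, "c": 0.15, "d": 0.05}
nucleus = filter_top_k_top_p(probs, top_p=0.8)  # ["a", "b"]
top1 = filter_top_k_top_p(probs, top_k=1)       # ["a"]
```

Sampling then proceeds over the kept tokens only, with their probabilities renormalized.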

string

List of strings that stop generation when they are produced. The returned output will not contain the stop strings.
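Stop-string handling amounts to truncating the output at the earliest match. A minimal sketch (the stop strings here are illustrative, not defaults of this model):

```python
def truncate_at_stop(text, stop):
    """Cut generated text at the first occurrence of any stop string.

    The stop string itself is excluded from the returned output.
    """
    cut = len(text)
    for s in stop:
        idx = text.find(s)
        if idx != -1:
            cut = min(cut, idx)  # keep the earliest match across all stops
    return text[:cut]

result = truncate_at_stop("Hello world\nUser: hi", ["\nUser:", "</s>"])
# result == "Hello world"
```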

Output

🌳 Alice: Hey Henk, I see you're working on something new at your desk. What's that fascinating gadget you've got there? 🌿 Henk: Hi Alice! That's Tiefigther, a language model I've recently discovered. It's an advanced AI system designed to understand and generate human language. Isn't that cool? 🌺 Alice: Language models? I've heard of them before, but could you explain what makes Tiefigther unique? 🌼 Henk: Absolutely