01-ai/yi-34b-chat
The Yi series models are large language models trained from scratch by developers at 01.AI.
Run 01-ai/yi-34b-chat with an API
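A minimal sketch of calling the model through the Replicate Python client is shown below. The input field names (`top_k`, `top_p`, `temperature`, `max_new_tokens`) are inferred from the schema that follows and may differ from the model's exact published field names; a Replicate API token is assumed to be set in the environment.

```python
# Minimal sketch: run 01-ai/yi-34b-chat via the Replicate Python client.
# Requires `pip install replicate` and REPLICATE_API_TOKEN in the environment.
import replicate

output = replicate.run(
    "01-ai/yi-34b-chat",
    input={
        "prompt": "Summarize the Yi series of models in one sentence.",
        # Field names below are inferred from the input schema and may need
        # adjusting to match the model's published parameters.
        "temperature": 0.3,
        "top_k": 50,
        "top_p": 0.8,
        "max_new_tokens": 1024,
    },
)

# The model returns a list of strings (see the output schema below);
# concatenate them to recover the full completion.
print("".join(output))
```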
Input schema
The number of highest probability tokens to consider for generating the output. If > 0, only keep the top k tokens with highest probability (top-k filtering).
- Default: 50
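For illustration, the sketch below shows what top-k filtering does to a vector of logits. It is a generic NumPy implementation of the technique, not the model's own sampling code.

```python
import numpy as np

def top_k_filter(logits: np.ndarray, k: int) -> np.ndarray:
    """Keep the k highest logits; mask the rest with -inf so they cannot be
    sampled. k <= 0 disables the filter."""
    if k <= 0:
        return logits
    cutoff = np.sort(logits)[-k]                    # k-th largest logit value
    return np.where(logits >= cutoff, logits, -np.inf)
```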
A probability threshold for generating the output. If < 1.0, only keep the top tokens with cumulative probability >= top_p (nucleus filtering). Nucleus filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751).
- Default: 0.8
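A generic NumPy sketch of nucleus (top-p) filtering as described in Holtzman et al. follows; it is illustrative only and not the model's actual implementation.

```python
import numpy as np

def top_p_filter(logits: np.ndarray, p: float) -> np.ndarray:
    """Keep the smallest set of tokens whose cumulative probability >= p;
    mask everything else with -inf."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    order = np.argsort(probs)[::-1]                 # most probable tokens first
    cumulative = np.cumsum(probs[order])
    keep = (cumulative - probs[order]) < p          # tokens needed to reach mass p
    filtered = np.full(logits.shape, -np.inf)
    filtered[order[keep]] = logits[order[keep]]
    return filtered
```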
The value used to modulate the next token probabilities.
- Default: 0.3
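Temperature rescales the logits before sampling: values below 1.0 sharpen the next-token distribution (more deterministic output), values above 1.0 flatten it. A one-line sketch of the standard formulation:

```python
import numpy as np

def apply_temperature(logits: np.ndarray, temperature: float) -> np.ndarray:
    """Divide logits by the temperature; smaller values sharpen the
    distribution, larger values flatten it."""
    return logits / max(temperature, 1e-8)
```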
The maximum number of tokens the model should generate as output.
- Default: 1024
The template used to format the prompt. The input prompt is inserted into the template using the `{prompt}` placeholder.
- Default: "<|im_start|>system\nYou are a helpful assistant<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n"
Repetition penalty applied to tokens that have already been generated; values greater than 1.0 discourage repetition, while 1.0 applies no penalty.
- Default: 1.2
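A common formulation of repetition penalty (the one popularized by CTRL and used in Hugging Face's samplers) divides positive logits and multiplies negative logits of already-generated tokens; whether this model applies exactly that variant is an assumption.

```python
import numpy as np

def apply_repetition_penalty(logits: np.ndarray, generated: list[int],
                             penalty: float) -> np.ndarray:
    """Discourage tokens that were already generated. penalty > 1.0 lowers
    their scores; penalty == 1.0 leaves the logits unchanged."""
    out = logits.copy()
    for token_id in set(generated):
        if out[token_id] > 0:
            out[token_id] /= penalty     # shrink positive logits
        else:
            out[token_id] *= penalty     # push negative logits further down
    return out
```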
Output schema
- Type: string[]