meta / llama-2-13b-chat
A 13-billion-parameter language model from Meta, fine-tuned for chat completions.
Run replicate-internal/llama-2-13b-chat-int8-1xa100-80gb-triton with an API
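For example, a minimal call with the official `replicate` Python client might look like the sketch below. The public `meta/llama-2-13b-chat` identifier is an assumption; substitute the model or deployment name you actually have access to.

```python
# Minimal sketch using the official `replicate` Python client
# (pip install replicate; set the REPLICATE_API_TOKEN environment variable).
# The identifier "meta/llama-2-13b-chat" is an assumption here.
import replicate

output = replicate.run(
    "meta/llama-2-13b-chat",
    input={"prompt": "Explain what a token is in one sentence."},
)

# The output schema is string[]: join the chunks to get the full text.
print("".join(output))
```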
Input schema
`seed`
Random seed. Leave blank to randomize the seed.
`top_k`
When decoding text, samples from the top k most likely tokens; lower to ignore less likely tokens.
- Minimum: -1
`top_p`
When decoding text, samples from the top p percentage of most likely tokens; lower to ignore less likely tokens.
- Default: 0.95
- Maximum: 1
`prompt`
Prompt to send to the model.
`max_tokens`
Maximum number of tokens to generate. A word is generally 2-3 tokens.
- Default: 512
- Minimum: 1
`min_tokens`
Minimum number of tokens to generate. To disable, set to -1. A word is generally 2-3 tokens.
- Minimum: -1
`temperature`
Adjusts the randomness of outputs: values greater than 1 are more random, 0 is deterministic, and 0.75 is a good starting value.
- Default: 0.7
- Maximum: 5
`system_prompt`
System prompt to send to the model. This is prepended to the prompt and helps guide system behavior.
- Default: "You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\n\nIf a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information."
`length_penalty`
A parameter that controls how long the outputs are. Values less than 1 make the model tend to generate shorter outputs; values greater than 1 tend to produce longer outputs.
- Default: 1
- Maximum: 5
`max_new_tokens`
This parameter has been renamed to max_tokens. max_new_tokens exists only for backwards compatibility; we recommend using max_tokens instead. The two cannot both be specified.
- Minimum: 1
`min_new_tokens`
This parameter has been renamed to min_tokens. min_new_tokens exists only for backwards compatibility; we recommend using min_tokens instead. The two cannot both be specified.
- Minimum: -1
`stop_sequences`
A comma-separated list of sequences at which to stop generation. For example, '<end>,<stop>' will stop generation at the first instance of '<end>' or '<stop>'.
`prompt_template`
Template for formatting the prompt. Can be an arbitrary string, but must contain the substring `{prompt}`.
- Default: "<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{prompt} [/INST]"
`presence_penalty`
A parameter that penalizes repeated tokens regardless of the number of appearances. As the value increases, the model will be less likely to repeat tokens in the output.
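Putting the schema together, the sketch below shows an example input payload. The field names are reconstructed from the parameter descriptions above and should be treated as assumptions, not authoritative names.

```python
# Example input payload exercising most of the parameters documented above.
# Field names are reconstructed from this schema's descriptions (assumed).
example_input = {
    "prompt": "Write a haiku about GPUs.",
    "system_prompt": "You are a concise poet.",
    "max_tokens": 128,                 # cap on generated tokens
    "min_tokens": -1,                  # -1 disables the minimum
    "temperature": 0.75,               # suggested starting value
    "top_p": 0.95,                     # nucleus sampling cutoff
    "top_k": 50,                       # sample from the 50 most likely tokens
    "stop_sequences": "<end>,<stop>",  # comma-separated string, not a list
    "presence_penalty": 0.5,           # discourage repeated tokens
    "seed": 42,                        # omit to randomize
}
```

This dict can be passed as the `input` argument to `replicate.run`, as in the example near the top of this page.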
Output schema
- Type: string[]
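Because the output is `string[]`, the completion arrives as an ordered list of text chunks, which also makes incremental streaming natural. Below is a sketch of consuming the output as it is generated, assuming a recent `replicate` client version that exposes `replicate.stream` and the same model identifier as above.

```python
import replicate

# Print chunks as they arrive; joining them in order yields the same
# string[] that the output schema describes. replicate.stream is assumed
# to be available (recent client versions), as is the model identifier.
for event in replicate.stream(
    "meta/llama-2-13b-chat",
    input={"prompt": "Give a one-line fun fact about llamas."},
):
    print(str(event), end="")
```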