meta/llama-2-7b:acdbe5a4

Input

*string

Prompt to send to Llama v2.

integer
(minimum: 1)

Maximum number of tokens to generate. A word is generally 2-3 tokens.

Default: 500

integer
(minimum: -1)

Minimum number of tokens to generate. To disable, set to -1. A word is generally 2-3 tokens.

Default: -1

number
(minimum: 0.01, maximum: 5)

Adjusts randomness of outputs; values greater than 1 are more random, values near 0 are more deterministic. 0.75 is a good starting value.

Default: 0.95

number
(minimum: 0, maximum: 1)

When decoding text, samples from the top p percentage of most likely tokens; lower this value to ignore less likely tokens.

Default: 0.95

integer
(minimum: 0)

When decoding text, samples from the top k most likely tokens; lower this value to ignore less likely tokens.

Default: 250
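
Temperature, top p, and top k work together during decoding: the logits are scaled by the temperature, the candidate set is then limited to the k most likely tokens and to the smallest set whose cumulative probability reaches p, and the next token is sampled from what remains. The NumPy sketch below illustrates that pipeline; the function and variable names are my own, and it is an illustration of the sampling idea, not the model's actual decoding code.

```python
import numpy as np

def sample_token(logits, temperature=0.95, top_p=0.95, top_k=250, rng=None):
    """Illustrative temperature + top-k + top-p (nucleus) sampling over raw logits."""
    rng = rng or np.random.default_rng()
    logits = np.asarray(logits, dtype=np.float64)

    # Temperature: values below 1 sharpen the distribution, above 1 flatten it.
    scaled = logits / temperature

    # Softmax to probabilities.
    scaled -= scaled.max()
    probs = np.exp(scaled)
    probs /= probs.sum()

    # Top-k: keep only the k most likely tokens (k=0 disables the filter).
    order = np.argsort(probs)[::-1]
    if top_k > 0:
        order = order[:top_k]

    # Top-p: keep the smallest prefix whose cumulative probability reaches p.
    cumulative = np.cumsum(probs[order])
    cutoff = np.searchsorted(cumulative, top_p) + 1
    order = order[:cutoff]

    # Renormalize over the surviving candidates and sample one token id.
    kept = probs[order]
    kept /= kept.sum()
    return int(rng.choice(order, p=kept))
```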

number
(minimum: 0.01, maximum: 5)

Penalty for repeated words in generated text; 1 is no penalty, values greater than 1 discourage repetition, less than 1 encourage it.

Default: 1.15

integer
(minimum: -1)

Number of most recent tokens to apply the repetition penalty to; set to -1 to apply it to the whole context.

Default: 256

integer
(minimum: 1)

Gradually decrease the penalty over this many tokens.

Default: 128
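
The three repetition settings above describe a penalty that discourages recently generated tokens, applies at full strength over a window of recent tokens, and fades out over a further span. The exact formula is not given on this page, so the sketch below is only one plausible reading of those settings; the helper name and the linear fade-out are my own assumptions.

```python
import numpy as np

def apply_repetition_penalty(logits, generated_ids, penalty=1.15,
                             sustain=256, decay=128):
    """Illustrative repetition penalty: full strength over the most recent
    `sustain` tokens, fading linearly to nothing over the next `decay` tokens.
    sustain=-1 applies full strength to the whole context."""
    logits = np.asarray(logits, dtype=np.float64).copy()
    seen = set()

    # Walk the generated tokens from newest to oldest.
    for age, token_id in enumerate(reversed(generated_ids)):
        if token_id in seen:
            continue                                # penalize each token once
        if sustain == -1 or age < sustain:
            strength = penalty                      # full penalty
        elif age < sustain + decay:
            frac = 1.0 - (age - sustain) / decay    # linear fade-out
            strength = 1.0 + (penalty - 1.0) * frac
        else:
            break                                   # too old to penalize
        seen.add(token_id)

        # Dividing positive logits and multiplying negative ones both make
        # the repeated token less likely.
        if logits[token_id] > 0:
            logits[token_id] /= strength
        else:
            logits[token_id] *= strength

    return logits
```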

boolean

Provide debugging output in logs.

Default: false
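
Taken together, these inputs are what you pass when running the model through the Replicate API. The sketch below uses the official Python client (`pip install replicate`, with `REPLICATE_API_TOKEN` set in the environment); the version tag shown at the top of this page is abbreviated, and the parameter names in the `input` dict are inferred from the descriptions above, so check both against the model's full version string and its input schema before using them.

```python
import replicate

# Illustrative call; replace the abbreviated version tag with the full hash
# from the model page. Parameter names are assumed from the descriptions
# above and should be verified against the model's input schema.
output = replicate.run(
    "meta/llama-2-7b:acdbe5a4...",
    input={
        "prompt": "Write a haiku about large language models.",
        "max_length": 500,            # maximum number of tokens to generate
        "temperature": 0.75,          # suggested starting value
        "top_p": 0.95,
        "top_k": 250,
        "repetition_penalty": 1.15,
        "debug": False,
    },
)

# The client returns the generated text as a list of string chunks.
print("".join(output))
```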

Output
