
meta/codellama-7b:6cae5ee8

Input

prompt
*string (required)

Prompt to send to CodeLlama.

max_tokens
integer
(minimum: 1)

Maximum number of tokens to generate. A word is generally 2-3 tokens.

Default: 128

temperature
number
(minimum: 0.01, maximum: 5)

Adjusts the randomness of outputs: values greater than 1 are more random, values approaching 0 are more deterministic. 0.75 is a good starting value.

Default: 0.75
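As a rough illustration of what the temperature setting does, here is a minimal sketch of temperature-scaled softmax sampling. The function name and the toy logits are ours, not part of this model's API:

```python
import math

def sample_probs(logits, temperature):
    """Convert logits to sampling probabilities at a given temperature.
    Higher temperature flattens the distribution (more random);
    temperature near 0 sharpens it (more deterministic)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
low = sample_probs(logits, 0.1)   # near-deterministic: almost all mass on the top logit
high = sample_probs(logits, 5.0)  # near-uniform: noticeably more random
```

At temperature 0.1 the top token takes essentially all the probability mass, while at 5.0 the three options end up nearly equally likely.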

top_p
number
(minimum: 0, maximum: 1)

When decoding text, samples from the top p percentage of most likely tokens; lower it to ignore less likely tokens.

Default: 0.9
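To make the top-p (nucleus) behavior concrete, a minimal sketch of the filtering step, with an illustrative toy distribution of our own:

```python
def top_p_filter(probs, p):
    """Keep the smallest set of token indices whose cumulative
    probability reaches p; everything less likely is dropped
    before sampling."""
    ranked = sorted(enumerate(probs), key=lambda kv: kv[1], reverse=True)
    kept, cum = [], 0.0
    for idx, prob in ranked:
        kept.append(idx)
        cum += prob
        if cum >= p:
            break
    return kept

probs = [0.5, 0.3, 0.15, 0.05]
top_p_filter(probs, 0.9)  # -> [0, 1, 2]
top_p_filter(probs, 0.5)  # -> [0]
```

Lowering p shrinks the candidate pool, which is why a lower value ignores less likely tokens.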

top_k
integer
(minimum: 0)

When decoding text, samples from the top k most likely tokens; lower it to ignore less likely tokens.

Default: 50
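Top-k is the simpler cousin of top-p: instead of a probability budget, it keeps a fixed number of candidates. A minimal sketch (toy values, not this model's internals):

```python
def top_k_filter(probs, k):
    """Keep only the k most likely token indices; lower k ignores
    less likely tokens entirely."""
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    return ranked[:k]

probs = [0.1, 0.5, 0.3, 0.1]
top_k_filter(probs, 2)  # -> [1, 2]
```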

stop_sequences
string

A comma-separated list of sequences at which to stop generation. For example, '<end>,<stop>' stops generation at the first instance of '<end>' or '<stop>'.
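The stop-sequence behavior described above can be sketched as a simple post-processing step. This is our own illustration of the semantics, not the model's actual implementation:

```python
def apply_stop_sequences(text, stop_str):
    """Truncate generated text at the earliest occurrence of any
    stop sequence from a comma-separated list like '<end>,<stop>'."""
    cut = len(text)
    for seq in stop_str.split(","):
        pos = text.find(seq)
        if pos != -1:
            cut = min(cut, pos)
    return text[:cut]

apply_stop_sequences("def add(a, b):<end> extra", "<end>,<stop>")
# -> "def add(a, b):"
```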

debug
boolean

Provide debugging output in logs.

Default: false
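Putting the fields together, here is a sketch of assembling an input payload with the defaults listed above. The field names (prompt, max_tokens, temperature, top_p, top_k, stop_sequences, debug) are assumptions inferred from the field descriptions; verify them against the model's actual schema before use:

```python
def build_input(prompt, max_tokens=128, temperature=0.75, top_p=0.9,
                top_k=50, stop=None, debug=False):
    """Assemble an input payload using the defaults listed above.
    Field names are assumptions inferred from the field descriptions."""
    payload = {
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
        "top_p": top_p,
        "top_k": top_k,
        "debug": debug,
    }
    if stop:
        payload["stop_sequences"] = stop
    return payload

payload = build_input("def fibonacci(n):")
# With the Replicate Python client, this could then be passed along as, e.g.:
# replicate.run("meta/codellama-7b:6cae5ee8", input=payload)
```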
