meta / codellama-7b

A 7 billion parameter Llama tuned for coding and conversation

  • Public
  • 15.4K runs
  • L40S
  • GitHub
  • Paper
  • License

Input

*string

Prompt to send to CodeLlama.

integer
(minimum: 1)

Maximum number of tokens to generate. A word is generally 2-3 tokens.

Default: 128

number
(minimum: 0.01, maximum: 5)

Adjusts the randomness of outputs: values greater than 1 are more random, values approaching 0 are nearly deterministic; 0.75 is a good starting value.

Default: 0.75
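The effect of this parameter can be sketched with a temperature-scaled softmax in plain Python. This illustrates the general sampling technique, not this model's internal code:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Divide logits by the temperature before softmax: low temperatures
    sharpen the distribution (more deterministic), high temperatures
    flatten it (more random)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
cold = softmax_with_temperature(logits, 0.2)  # most mass on the top token
warm = softmax_with_temperature(logits, 2.0)  # closer to uniform
```

At temperature 0.2 the top logit dominates; at 2.0 the probabilities spread out, which is why values near the 0.01 minimum behave almost deterministically.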

number
(minimum: 0, maximum: 1)

When decoding text, samples only from the most likely tokens whose cumulative probability reaches p; lower values ignore less likely tokens.

Default: 0.9
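Top-p (nucleus) filtering can be sketched as follows; this is a generic illustration of the technique, not the model's implementation:

```python
def top_p_filter(probs, top_p):
    """Keep the smallest set of highest-probability tokens whose cumulative
    probability reaches top_p; return their (index, prob) pairs."""
    ranked = sorted(enumerate(probs), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = [], 0.0
    for idx, p in ranked:
        kept.append((idx, p))
        cumulative += p
        if cumulative >= top_p:
            break
    return kept

probs = [0.5, 0.3, 0.15, 0.05]
print(top_p_filter(probs, 0.9))  # keeps tokens 0, 1, 2 (0.5 + 0.3 + 0.15 >= 0.9)
```

With top_p = 0.9, the lowest-probability token (0.05) is excluded from sampling.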

integer
(minimum: 0)

When decoding text, samples from the top k most likely tokens; lower to ignore less likely tokens

Default: 50
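Top-k filtering is simpler: only the k highest-probability tokens remain candidates. A minimal sketch of the general technique:

```python
def top_k_filter(probs, top_k):
    """Keep only the top_k highest-probability tokens as (index, prob) pairs."""
    ranked = sorted(enumerate(probs), key=lambda kv: kv[1], reverse=True)
    return ranked[:top_k]

probs = [0.1, 0.5, 0.4]
print(top_k_filter(probs, 2))  # keeps tokens 1 and 2, drops token 0
```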

string

A comma-separated list of sequences to stop generation at. For example, '<end>,<stop>' will stop generation at the first instance of '<end>' or '<stop>'.
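The behavior described above can be sketched as a simple truncation: split the comma-separated string and cut the text at the earliest stop sequence found. This is an illustration of the semantics, not the model's actual code:

```python
def apply_stop_sequences(text, stop_csv):
    """Truncate text at the earliest occurrence of any stop sequence,
    given as a comma-separated string, e.g. '<end>,<stop>'."""
    cut = len(text)
    for stop in stop_csv.split(","):
        pos = text.find(stop)
        if pos != -1:
            cut = min(cut, pos)
    return text[:cut]

print(apply_stop_sequences("hello<stop>world", "<end>,<stop>"))  # prints "hello"
```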

boolean

Provide debugging output in logs.

Default: false
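Taken together, a request to this model might assemble an input payload like the sketch below. The field names (`prompt`, `max_tokens`, and so on) are assumptions inferred from the parameter descriptions above, not confirmed from the model's API schema; check the schema before relying on them:

```python
# Hypothetical input payload mirroring the documented parameters and defaults;
# the exact field names are assumptions, verify against the API schema.
payload = {
    "prompt": "def add(",               # required string input
    "max_tokens": 128,                  # default: 128
    "temperature": 0.75,                # default: 0.75
    "top_p": 0.9,                       # default: 0.9
    "top_k": 50,                        # default: 50
    "stop_sequences": "<end>,<stop>",   # comma-separated stop list
    "debug": False,                     # default: false
}

# With the Replicate Python client this would be passed roughly as:
#   import replicate
#   output = replicate.run("meta/codellama-7b", input=payload)
```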

Output

a, b): return a + b

# function to subtract 2 numbers
def subtract(a, b): return a - b

# function to multiply 2 numbers
def multiply(a, b): return a *b; # semicolon is used to terminate the line and execute the next line. if we forget semicolon then it will

This example was created by a different version, meta/codellama-7b:6cae5ee8.

Run time and cost

This model costs approximately $0.00098 to run on Replicate, or 1020 runs per $1, but this varies depending on your inputs. It is also open source and you can run it on your own computer with Docker.
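The per-run figure and the runs-per-dollar figure above are consistent, as a quick sanity check shows:

```python
cost_per_run = 0.00098            # approximate cost quoted above, in USD
runs_per_dollar = 1 / cost_per_run
print(round(runs_per_dollar))     # prints 1020
```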

This model runs on Nvidia L40S GPU hardware. Predictions typically complete within 1 second.

Readme

CodeLlama is a family of fine-tuned Llama 2 models for coding. This is CodeLlama-7b, a 7 billion parameter Llama model tuned for completing code.