meta / codellama-34b-instruct

A 34-billion-parameter Llama model tuned for coding and conversation

  • Public
  • 155.4K runs
  • L40S
  • GitHub
  • Paper
  • License

Run meta/codellama-34b-instruct with an API (an example call follows the input schema below)

Input schema

top_k (integer)
Top K
Default: 10

top_p (number)
Top P
Default: 0.95

prompt (string)
Prompt

max_tokens (integer)
Max number of tokens to return
Default: 500

temperature (number)
Temperature
Default: 0.8

system_prompt (string)
System prompt to send to CodeLlama. This is prepended to the prompt and helps guide system behavior.

repeat_penalty (number)
Repetition penalty
Default: 1.1
Maximum: 2

presence_penalty (number)
Presence penalty
Maximum: 2

frequency_penalty (number)
Frequency penalty
Maximum: 2
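
The parameters above map directly onto the API's input object. Below is a minimal sketch of a call using the Replicate Python client, assuming the `replicate` package is installed and a `REPLICATE_API_TOKEN` environment variable is set; the prompt text is illustrative, and the remaining values mirror the defaults listed above.

```python
# Minimal sketch: call meta/codellama-34b-instruct via the Replicate Python client.
# Assumes `pip install replicate` and REPLICATE_API_TOKEN set in the environment.
import replicate

output = replicate.run(
    "meta/codellama-34b-instruct",
    input={
        "prompt": "Write a Python function that checks whether a string is a palindrome.",
        "system_prompt": "You are a helpful coding assistant.",  # prepended to the prompt
        "max_tokens": 500,      # default
        "temperature": 0.8,     # default
        "top_k": 10,            # default
        "top_p": 0.95,          # default
        "repeat_penalty": 1.1,  # default; maximum 2
    },
)
```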

Output schema

Type: string[]
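
The model returns its generation as an array of strings rather than a single block of text. A minimal sketch of assembling the final completion, assuming `output` is the value returned by the call above:

```python
# The output schema is string[]: the generated text arrives as a list
# (or stream) of chunks. Concatenating them yields the full completion.
text = "".join(output)
print(text)
```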