meta / codellama-34b-instruct
A 34-billion-parameter Llama model tuned for coding and conversation
Run meta/codellama-34b-instruct with an API
Input schema
Top K
- Default: 10

Top P
- Default: 0.95

Prompt

Max tokens
- Max number of tokens to return
- Default: 500

Temperature
- Default: 0.8

System prompt
- System prompt to send to CodeLlama. This is prepended to the prompt and helps guide system behavior.

Repetition penalty
- Default: 1.1
- Maximum: 2

Presence penalty
- Maximum: 2

Frequency penalty
- Maximum: 2
Output schema
- Type: string[]