meta / codellama-34b-instruct

A 34 billion parameter Llama tuned for coding and conversation

  • Public
  • 152.3K runs
  • L40S
  • GitHub
  • Paper
  • License

Input

Prompt
*string (required)

System prompt
string
System prompt to send to CodeLlama. This is prepended to the prompt and helps guide system behavior.
Default: ""

Max tokens
integer
Max number of tokens to return.
Default: 500

Temperature
number
Default: 0.8

Top P
number
Default: 0.95

Top K
integer
Default: 10

Frequency penalty
number (minimum: 0, maximum: 2)
Default: 0

Presence penalty
number (minimum: 0, maximum: 2)
Default: 0

Repetition penalty
number (minimum: 0, maximum: 2)
Default: 1.1
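The fields above map directly onto an API request. A minimal sketch of a request payload, assuming the conventional Replicate field names (`prompt`, `system_prompt`, `max_tokens`, and so on); these are assumptions and should be checked against the model's current schema:

```python
# Hypothetical request payload for this model. The field names are
# assumed from the input list above, not confirmed from the schema.
payload = {
    "prompt": "Write a Python function that reverses a string.",
    "system_prompt": "You are a concise coding assistant.",  # default: ""
    "max_tokens": 500,
    "temperature": 0.8,
    "top_p": 0.95,
    "top_k": 10,
    "frequency_penalty": 0,
    "presence_penalty": 0,
    "repetition_penalty": 1.1,
}

# With the `replicate` package installed and REPLICATE_API_TOKEN set,
# the call would look roughly like:
#
#   import replicate
#   output = replicate.run("meta/codellama-34b-instruct", input=payload)
#   print("".join(output))  # the client yields the text in chunks
```

The values shown are just the defaults listed above; in practice you would mainly vary `prompt`, `temperature`, and `max_tokens`.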

Output

Arrgh, here be me code for ye, matey:

```
def fibonacci(n):
    if n <= 1:
        return n
    else:
        return fibonacci(n-1) + fibonacci(n-2)
```

Now, let's break down the code and talk like pirates, savvy? The function starts with a simple check: if the input `n` is less than or equal to 1, we just return it, as the first two numbers in the sequence are 0 and 1. This be called the "base case."

```
if n <= 1:
    return n
```

But if the input `n` be greater than 1, we have a bit more work to do. We use a recursive call to find the previous two numbers in the sequence by calling the function again with `n-1`. This be called the "recursive case."

```
else:
    return fibonacci(n-1) + fibonacci(n-2)
```

Now, let's talk like pirates. The first two numbers in the sequence be 0 and 1, just like on the high seas. The next number be 0+1=1, then 1+1=2, which be like a ship with two masts. Then it be 1+2=3, which be like a ship with three masts. And so on and so forth.

```
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765
```

So there ye have it, me hearty. The fibonacci sequence be a series of numbers that start with 0 and 1, and each number after that be the sum of the two before it. And if ye ever find yerself lost at sea, just remember: fibonacci be the answer to all yer problems!
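A side note on the generated code: the recursive version recomputes each subproblem many times, so its running time grows exponentially with `n`. A linear-time iterative sketch (an editorial addition, not part of the model's output):

```python
def fibonacci(n):
    # Walk the pair (F(k), F(k+1)) upward n times, starting at (0, 1).
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print([fibonacci(i) for i in range(10)])  # → [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```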

This example was created by a different version, meta/codellama-34b-instruct:8281a5c6.

Run time and cost

This model costs approximately $0.0032 to run on Replicate, or 312 runs per $1, but this varies depending on your inputs. It is also open source and you can run it on your own computer with Docker.

This model runs on Nvidia L40S GPU hardware. Predictions typically complete within 4 seconds.
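The two pricing figures quoted above are consistent with each other; a quick check of the arithmetic:

```python
cost_per_run = 0.0032  # approximate per-run cost quoted above, in USD
runs_per_dollar = 1 / cost_per_run
# roughly 312 runs per $1, matching the figure in the text
print(f"{runs_per_dollar:.1f} runs per dollar")
```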

Readme

CodeLlama is a family of fine-tuned Llama-2 models for coding. This is CodeLlama-34b-instruct, a 34 billion parameter Llama model tuned for chatting about code.