moinnadeem / codellama-34b-instruct-vllm

  • Public
  • 78 runs
Input

string (required)

Prompt to send to the model.

string

System prompt to send to the model. This is prepended to the prompt and helps guide system behavior.

Default: "You are a helpful assistant."

integer
(minimum: 1)

Maximum number of tokens to generate. A word is generally 2-3 tokens.

Default: 128

integer
(minimum: -1)

Minimum number of tokens to generate. To disable, set to -1. A word is generally 2-3 tokens.

Default: -1

number
(minimum: 0.01, maximum: 5)

Adjusts the randomness of outputs: values greater than 1 are more random, values near 0 are nearly deterministic, and 0.75 is a good starting value.

Default: 0.75
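
Temperature works by dividing the model's logits before they are converted to probabilities. The sketch below is illustrative, not the model's actual implementation; the function name `apply_temperature` and the example logits are assumptions for demonstration.

```python
import math

def apply_temperature(logits, temperature):
    """Scale logits by 1/temperature, then softmax into probabilities.
    Low temperature sharpens the distribution; high temperature flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max before exponentiating, for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Low temperature concentrates mass on the most likely token;
# high temperature spreads it out toward uniform.
probs_cool = apply_temperature([2.0, 1.0, 0.1], 0.25)
probs_hot = apply_temperature([2.0, 1.0, 0.1], 5.0)
```

At temperature 1.0 this reduces to an ordinary softmax, which is why 0.75 gives output only slightly more focused than the model's raw distribution.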

number
(minimum: 0, maximum: 1)

When decoding text, samples from the smallest set of most likely tokens whose cumulative probability reaches p; lower this to ignore less likely tokens.

Default: 0.9

integer
(minimum: 0)

When decoding text, samples from the top k most likely tokens; lower this to ignore less likely tokens.

Default: 50
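
The two filters above compose: top-k keeps at most k candidates, then top-p (nucleus sampling) keeps the smallest prefix of those whose cumulative probability reaches p. A minimal sketch, assuming a plain list of token probabilities; the function name `top_k_top_p_filter` is illustrative, not part of this model's API.

```python
def top_k_top_p_filter(probs, top_k=50, top_p=0.9):
    """Keep the top_k most likely tokens, then the smallest set of those
    whose cumulative probability reaches top_p; renormalize the survivors."""
    indexed = sorted(enumerate(probs), key=lambda kv: kv[1], reverse=True)
    kept = indexed[:top_k]
    cumulative, nucleus = 0.0, []
    for idx, p in kept:
        nucleus.append((idx, p))
        cumulative += p
        if cumulative >= top_p:
            break
    total = sum(p for _, p in nucleus)
    return {idx: p / total for idx, p in nucleus}

# Tokens 0 and 1 already cover 0.8 cumulative probability, so token 2 is dropped
# even though top_k=3 would have allowed it.
filtered = top_k_top_p_filter([0.5, 0.3, 0.15, 0.05], top_k=3, top_p=0.8)
```

Setting top_k=0 on the actual model disables the top-k filter entirely, leaving only nucleus sampling.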

string
Shift + Return to add a new line

A comma-separated list of sequences to stop generation at. For example, '<end>,<stop>' will stop generation at the first instance of '<end>' or '<stop>'.
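
Conceptually, the generated text is truncated at the earliest occurrence of any listed sequence. A minimal sketch of that behavior; the helper name `apply_stop_sequences` is hypothetical, not part of the model's interface.

```python
def apply_stop_sequences(text, stop_sequences):
    """Truncate text at the earliest occurrence of any comma-separated stop sequence."""
    cut = len(text)
    for stop in stop_sequences.split(","):
        idx = text.find(stop)
        if idx != -1:
            cut = min(cut, idx)  # keep the earliest match across all sequences
    return text[:cut]

truncated = apply_stop_sequences("def f():<end> trailing text", "<end>,<stop>")
# → "def f():"
```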

integer

Random seed. Leave blank to randomize the seed.

boolean

Provide debugging output in logs.

Default: false

Output

```python
import requests
from bs4 import BeautifulSoup

def download_h1_text(url):
    """Fetch a page and return the text of every <h1> element."""
    response = requests.get(url)
    soup = BeautifulSoup(response.content, 'html.parser')
    h1_elements = soup.find_all('h1')
    text = [element.text for element in h1_elements]
    return text
```

This output was created using a different version of the model, moinnadeem/codellama-34b-instruct-vllm:c97256d5.

Run time and cost

This model costs approximately $0.027 to run on Replicate, or 37 runs per $1, but this varies depending on your inputs. It is also open source and you can run it on your own computer with Docker.

This model runs on Nvidia A100 (80GB) GPU hardware. Predictions typically complete within 19 seconds. The predict time for this model varies significantly based on the inputs.

Readme

This model doesn't have a readme.