nateraw / codellama-13b-instruct

  • Public
  • 8 runs
  • L40S

Input

*string

string

The system prompt to use (for chat/instruct models only)

Default: "Provide answers in Python"

integer

The maximum number of tokens the model should generate as output.

Default: 256

number

The value used to modulate the next token probabilities.

Default: 0.2
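To make the temperature setting concrete, here is a small, self-contained sketch (not taken from this model's code) of how a low temperature such as 0.2 sharpens the next-token distribution before sampling; the logits are made up for illustration.

```
import math

def apply_temperature(logits, temperature=0.2):
    # Dividing logits by a temperature < 1.0 sharpens the distribution
    # (more deterministic); values > 1.0 flatten it.
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Example with made-up logits: most of the mass collapses onto the top token.
print(apply_temperature([2.0, 1.0, 0.5], temperature=0.2))
```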

number

A probability threshold for generating the output. If < 1.0, only keep the top tokens with cumulative probability >= top_p (nucleus filtering). Nucleus filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751).

Default: 0.9
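As an illustration of nucleus filtering (not this model's internals), the sketch below keeps the smallest set of highest-probability tokens whose cumulative probability reaches top_p and renormalizes; the probabilities are made up.

```
def top_p_filter(probs, top_p=0.9):
    # Keep the smallest set of highest-probability tokens whose cumulative
    # probability reaches top_p; zero out the rest and renormalize.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cumulative = set(), 0.0
    for i in order:
        kept.add(i)
        cumulative += probs[i]
        if cumulative >= top_p:
            break
    filtered = [p if i in kept else 0.0 for i, p in enumerate(probs)]
    total = sum(filtered)
    return [p / total for p in filtered]

# Example with made-up probabilities: the 0.05 tail token is dropped.
print(top_p_filter([0.5, 0.3, 0.15, 0.05], top_p=0.9))
```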

integer

The number of highest probability tokens to consider for generating the output. If > 0, only keep the top k tokens with highest probability (top-k filtering).

Default: 50
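Similarly, a minimal sketch of top-k filtering (illustrative only, with made-up probabilities): only the k highest-probability tokens remain eligible for sampling.

```
def top_k_filter(probs, top_k=50):
    # Keep only the top_k highest-probability tokens; zero out the rest
    # and renormalize. Ties at the threshold are all kept.
    if top_k <= 0 or top_k >= len(probs):
        return probs
    threshold = sorted(probs, reverse=True)[top_k - 1]
    filtered = [p if p >= threshold else 0.0 for p in probs]
    total = sum(filtered)
    return [p / total for p in filtered]

# Example: with top_k=2, only the two most likely tokens survive.
print(top_k_filter([0.5, 0.3, 0.15, 0.05], top_k=2))
```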

Output

Here is an example of a Python function that reads an HTML file from the internet and extracts the text content of all the h1 elements:

```
import requests
from bs4 import BeautifulSoup


def get_h1_text(url):
    response = requests.get(url)
    soup = BeautifulSoup(response.content, 'html.parser')
    h1_elements = soup.find_all('h1')
    h1_text = []
    for h1 in h1_elements:
        h1_text.append(h1.text.strip())
    return h1_text
```

This function uses the `requests` library to make an HTTP GET request to the specified URL, and the `BeautifulSoup` library to parse the HTML response. It then uses the `find_all()` method of the `BeautifulSoup` object to find all the `h1` elements in the HTML document and appends the text content of each `h1` element to a list called `h1_text`. Finally, the function returns the `h1_text` list.

Here is an example of how you could use this function:

```
url = 'https://www.example.com'
h1_text = get_h1_text(url)
print(h1_text)
```

This would print a list of the text content of all the `h1` elements in the HTML document at the specified URL.
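For context, here is a hedged sketch of how a completion like the one above might be requested with the Replicate Python client. The input field names (prompt, system_prompt, max_tokens, temperature, top_p, top_k) and the version placeholder are assumptions inferred from the parameter descriptions above, not confirmed by this page; check the model's API schema for the exact names.

```
# Sketch only: field names and the <version> placeholder are assumptions,
# not confirmed by this page. Requires `pip install replicate` and a
# REPLICATE_API_TOKEN in the environment.
import replicate

output = replicate.run(
    "nateraw/codellama-13b-instruct:<version>",  # substitute the current version hash
    input={
        "prompt": "Write a Python function that downloads a web page and returns the text of every h1 element.",
        "system_prompt": "Provide answers in Python",  # default shown above
        "max_tokens": 256,
        "temperature": 0.2,
        "top_p": 0.9,
        "top_k": 50,
    },
)

# Language models on Replicate typically stream text chunks; join them.
print("".join(output))
```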

Run time and cost

This model runs on Nvidia L40S GPU hardware. We don't yet have enough runs of this model to provide performance information.

Readme

This model doesn't have a readme.