nateraw / codellama-7b-instruct

  • Public
  • 20 runs
  • L40S

Input

prompt (string, required)

The prompt to send to the model.

system_prompt (string)

The system prompt to use (for chat/instruct models only)

Default: "Provide answers in Python"

max_new_tokens (integer)

The maximum number of tokens the model should generate as output.

Default: 256

temperature (number)

The value used to modulate the next token probabilities.

Default: 0.2

top_p (number)

A probability threshold for generating the output. If < 1.0, only keep the top tokens with cumulative probability >= top_p (nucleus filtering). Nucleus filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751).

Default: 0.9

top_k (integer)

The number of highest-probability tokens to consider for generating the output. If > 0, only keep the top k tokens with highest probability (top-k filtering).

Default: 50
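To make the sampling parameters above concrete, here is a minimal sketch of how top-k and top-p (nucleus) filtering prune a next-token distribution before sampling. The `filter_probs` helper and the example vocabulary are hypothetical illustrations, not this model's actual implementation:

```
def filter_probs(probs, top_k=50, top_p=0.9):
    """Apply top-k, then top-p (nucleus) filtering to a dict of
    token -> probability, returning a renormalized distribution.
    Illustrative only; not the model's internal code."""
    # Sort tokens by probability, highest first.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    # top-k: keep only the k most probable tokens.
    if top_k > 0:
        ranked = ranked[:top_k]
    # top-p: keep the smallest prefix whose cumulative probability
    # reaches top_p.
    kept, cumulative = [], 0.0
    for token, p in ranked:
        kept.append((token, p))
        cumulative += p
        if cumulative >= top_p:
            break
    # Renormalize so the surviving probabilities sum to 1.
    total = sum(p for _, p in kept)
    return {token: p / total for token, p in kept}

# Hypothetical next-token distribution for a code model.
probs = {"def": 0.5, "class": 0.3, "import": 0.15, "lambda": 0.05}
print(filter_probs(probs, top_k=3, top_p=0.9))
```

Lower temperature sharpens the distribution before this filtering step, so low temperature plus a small top_k makes output close to deterministic, while higher values of all three parameters make it more varied.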

Output

Here is an example of how you might do this in Python:

```
import requests
from bs4 import BeautifulSoup

def get_h1_text(url):
    response = requests.get(url)
    soup = BeautifulSoup(response.content, 'html.parser')
    h1_texts = []
    for h1 in soup.find_all('h1'):
        h1_texts.append(h1.text.strip())
    return h1_texts
```

This function takes a URL as a string argument and returns a list of strings, where each string is the text content of an h1 element in the HTML document at that URL. You can use it like this:

```
h1_texts = get_h1_text('https://www.example.com')
print(h1_texts)
```

This will print a list containing the text content of every h1 element in the HTML document at the given URL.

Note: This example uses the `requests` and `BeautifulSoup` libraries to retrieve the HTML content of the URL and parse it with Beautiful Soup. You will need to install these libraries in your Python environment before running this code.
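If you would rather avoid the third-party dependencies mentioned in the model's answer, the same extraction can be sketched with the standard library's `html.parser` module. The `H1Extractor` class and the sample HTML string here are illustrative additions, not part of the model's output:

```
from html.parser import HTMLParser

class H1Extractor(HTMLParser):
    """Collects the text content of every <h1> element."""
    def __init__(self):
        super().__init__()
        self.in_h1 = False
        self.h1_texts = []

    def handle_starttag(self, tag, attrs):
        if tag == "h1":
            self.in_h1 = True
            self.h1_texts.append("")

    def handle_endtag(self, tag):
        if tag == "h1":
            self.in_h1 = False

    def handle_data(self, data):
        # Accumulate text only while inside an <h1> element.
        if self.in_h1:
            self.h1_texts[-1] += data

html = "<html><body><h1>First</h1><p>x</p><h1> Second </h1></body></html>"
parser = H1Extractor()
parser.feed(html)
print([t.strip() for t in parser.h1_texts])  # → ['First', 'Second']
```

Note that `html.parser` is event-driven rather than tree-based, so it avoids building a full document tree but requires tracking state (`in_h1`) by hand; Beautiful Soup is usually the more convenient choice when installing dependencies is acceptable.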

Run time and cost

This model runs on Nvidia L40S GPU hardware. We don't yet have enough runs of this model to provide performance information.

Readme

This model doesn't have a readme.