Run nateraw/codellama-7b-instruct using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
```javascript
const output = await replicate.run(
  "nateraw/codellama-7b-instruct:6a9502976b5d020c6dd421d5790d988cd77013b642ea1276669550603a81989f",
  {
    input: {
      top_k: 50,
      top_p: 0.95,
      message: "Write a python function that reads an html file from the internet and extracts the text content of all the h1 elements",
      temperature: 0.8,
      system_prompt: "Provide answers in Python",
      max_new_tokens: 1024
    }
  }
);
console.log(output);
```
The same request using Replicate's Python client:
```python
output = replicate.run(
    "nateraw/codellama-7b-instruct:6a9502976b5d020c6dd421d5790d988cd77013b642ea1276669550603a81989f",
    input={
        "top_k": 50,
        "top_p": 0.95,
        "message": "Write a python function that reads an html file from the internet and extracts the text content of all the h1 elements",
        "temperature": 0.8,
        "system_prompt": "Provide answers in Python",
        "max_new_tokens": 1024
    }
)

# The nateraw/codellama-7b-instruct model can stream output as it's running.
# The predict method returns an iterator, and you can iterate over that output.
# https://replicate.com/nateraw/codellama-7b-instruct/api#output-schema
for item in output:
    print(item, end="")
```
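Since the iterator yields string fragments, you can also collect the full response into a single string before using it; a small convenience sketch (the helper name here is mine, not part of the Replicate client):

```python
def collect_output(chunks) -> str:
    # Concatenate streamed string fragments into the complete response text.
    return "".join(chunks)
```

For example, `collect_output(output)` would give you the whole generated answer at once instead of printing it token by token.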
And the same request over HTTP with cURL:
```shell
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "nateraw/codellama-7b-instruct:6a9502976b5d020c6dd421d5790d988cd77013b642ea1276669550603a81989f",
    "input": {
      "top_k": 50,
      "top_p": 0.95,
      "message": "Write a python function that reads an html file from the internet and extracts the text content of all the h1 elements",
      "temperature": 0.8,
      "system_prompt": "Provide answers in Python",
      "max_new_tokens": 1024
    }
  }' \
  https://api.replicate.com/v1/predictions
```
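The `Prefer: wait` header asks the API to hold the connection open until the prediction finishes. Without it, the response returns immediately with a prediction `id`, which you then poll at `GET /v1/predictions/{id}` until `status` reaches a terminal value (`succeeded`, `failed`, or `canceled`). A minimal standard-library polling sketch (the helper names are my own, not part of Replicate's client):

```python
import json
import os
import time
import urllib.request

API_BASE = "https://api.replicate.com/v1/predictions"
TERMINAL_STATUSES = {"succeeded", "failed", "canceled"}

def is_done(prediction: dict) -> bool:
    # A prediction is finished once its status is terminal.
    return prediction.get("status") in TERMINAL_STATUSES

def get_prediction(prediction_id: str) -> dict:
    # Fetch the current state of a prediction from the HTTP API.
    req = urllib.request.Request(
        f"{API_BASE}/{prediction_id}",
        headers={"Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def wait_for(prediction_id: str, interval: float = 1.0) -> dict:
    # Poll until the prediction reaches a terminal status, then return it.
    while True:
        prediction = get_prediction(prediction_id)
        if is_done(prediction):
            return prediction
        time.sleep(interval)
```

In practice you would read the `id` field from the JSON returned by the POST above and pass it to `wait_for`; the finished prediction's `output` field holds the generated text.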
Here is an example of how you might do this in Python:
```python
import requests
from bs4 import BeautifulSoup

def get_h1_text(url):
    response = requests.get(url)
    soup = BeautifulSoup(response.content, 'html.parser')
    h1_texts = []
    for h1 in soup.find_all('h1'):
        h1_texts.append(h1.text.strip())
    return h1_texts
```
This function takes in a URL as a string argument and returns a list of strings, where each string is the text content of an h1 element in the HTML file at that URL.
You can use this function like this:
```python
h1_texts = get_h1_text('https://www.example.com')
print(h1_texts)
```
This will print a list containing the text content of every h1 element in the HTML document at the given URL.
Note: In this example, we are using the `requests` and `BeautifulSoup` libraries to retrieve the HTML content of the URL and parse it with Beautiful Soup. You will need to install these libraries in your Python environment (e.g. `pip install requests beautifulsoup4`) before running this code.
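If pulling in third-party libraries is not an option, the same h1 extraction can be sketched with only the standard library's `html.parser` module (this variant is mine, not part of the model's answer; it takes an HTML string rather than fetching a URL):

```python
from html.parser import HTMLParser

class H1Collector(HTMLParser):
    """Collects the text content of every <h1> element in an HTML document."""

    def __init__(self):
        super().__init__()
        self._in_h1 = False
        self._buffer = []
        self.h1_texts = []

    def handle_starttag(self, tag, attrs):
        if tag == "h1":
            self._in_h1 = True
            self._buffer = []

    def handle_data(self, data):
        if self._in_h1:
            self._buffer.append(data)

    def handle_endtag(self, tag):
        if tag == "h1" and self._in_h1:
            self._in_h1 = False
            self.h1_texts.append("".join(self._buffer).strip())

def get_h1_texts_from_string(html: str) -> list:
    # Parse the HTML string and return the stripped text of each h1 element.
    parser = H1Collector()
    parser.feed(html)
    return parser.h1_texts
```

This trades Beautiful Soup's robustness on messy real-world HTML for a zero-dependency script, which can be handy for quick checks of the model's output.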