Run nateraw/codellama-13b-instruct using Replicate's Node.js client. Check out the model's schema for an overview of inputs and outputs.
import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});

const output = await replicate.run(
  "nateraw/codellama-13b-instruct:4d4dfb567b910309c9501d56807864fc069ffcd2867552aea073c4b374eef309",
  {
    input: {
      top_k: 50,
      top_p: 0.95,
      message: "Write a python function that reads an html file from the internet and extracts the text content of all the h1 elements",
      temperature: 0.8,
      system_prompt: "Provide answers in Python",
      max_new_tokens: 1024
    }
  }
);
console.log(output);
Run nateraw/codellama-13b-instruct using Replicate's Python client. Check out the model's schema for an overview of inputs and outputs.
import replicate

output = replicate.run(
    "nateraw/codellama-13b-instruct:4d4dfb567b910309c9501d56807864fc069ffcd2867552aea073c4b374eef309",
    input={
        "top_k": 50,
        "top_p": 0.95,
        "message": "Write a python function that reads an html file from the internet and extracts the text content of all the h1 elements",
        "temperature": 0.8,
        "system_prompt": "Provide answers in Python",
        "max_new_tokens": 1024
    }
)
# The nateraw/codellama-13b-instruct model can stream output as it's running.
# The predict method returns an iterator, and you can iterate over that output.
for item in output:
    # https://replicate.com/nateraw/codellama-13b-instruct/api#output-schema
    print(item, end="")
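If you don't need to print tokens as they arrive, you can also drain the iterator and keep the whole response in one string. A minimal sketch of that pattern, with the prompt and token budget chosen only for illustration:
```
import replicate

output = replicate.run(
    "nateraw/codellama-13b-instruct:4d4dfb567b910309c9501d56807864fc069ffcd2867552aea073c4b374eef309",
    input={
        "message": "Write a python function that reverses a string",
        "max_new_tokens": 256
    }
)

# The iterator yields string fragments; join them into the full response.
generated = "".join(output)
print(generated)
```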
Run nateraw/codellama-13b-instruct using Replicate's HTTP API with cURL. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "4d4dfb567b910309c9501d56807864fc069ffcd2867552aea073c4b374eef309",
    "input": {
      "top_k": 50,
      "top_p": 0.95,
      "message": "Write a python function that reads an html file from the internet and extracts the text content of all the h1 elements",
      "temperature": 0.8,
      "system_prompt": "Provide answers in Python",
      "max_new_tokens": 1024
    }
  }' \
  https://api.replicate.com/v1/predictions
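The same request can be made from any HTTP client. Here is a minimal sketch using Python's `requests` library; the URL, headers, and body mirror the cURL command above, while the timeout value is an illustrative assumption:
```
import os
import requests

resp = requests.post(
    "https://api.replicate.com/v1/predictions",
    headers={
        "Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}",
        "Content-Type": "application/json",
        # "Prefer: wait" holds the connection open until the prediction
        # finishes (or times out) instead of returning immediately.
        "Prefer": "wait",
    },
    json={
        "version": "4d4dfb567b910309c9501d56807864fc069ffcd2867552aea073c4b374eef309",
        "input": {
            "message": "Write a python function that reads an html file from the internet and extracts the text content of all the h1 elements",
            "system_prompt": "Provide answers in Python",
            "max_new_tokens": 1024,
        },
    },
    timeout=120,
)
resp.raise_for_status()
prediction = resp.json()
print(prediction["output"])
```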
Run this to download the model with Cog and run it in your local environment:
cog predict r8.im/nateraw/codellama-13b-instruct@sha256:4d4dfb567b910309c9501d56807864fc069ffcd2867552aea073c4b374eef309 \
  -i 'top_k=50' \
  -i 'top_p=0.95' \
  -i 'message="Write a python function that reads an html file from the internet and extracts the text content of all the h1 elements"' \
  -i 'temperature=0.8' \
  -i 'system_prompt="Provide answers in Python"' \
  -i 'max_new_tokens=1024'
Or run this to download the model and serve it as a local HTTP endpoint with Docker:
docker run -d -p 5000:5000 --gpus=all r8.im/nateraw/codellama-13b-instruct@sha256:4d4dfb567b910309c9501d56807864fc069ffcd2867552aea073c4b374eef309

curl -s -X POST \
  -H "Content-Type: application/json" \
  -d $'{
    "input": {
      "top_k": 50,
      "top_p": 0.95,
      "message": "Write a python function that reads an html file from the internet and extracts the text content of all the h1 elements",
      "temperature": 0.8,
      "system_prompt": "Provide answers in Python",
      "max_new_tokens": 1024
    }
  }' \
  http://localhost:5000/predictions
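The local Cog server speaks the same JSON protocol, so you can call it from Python as well. A minimal sketch, assuming the container started by the `docker run` command above is listening on port 5000; the timeout is an illustrative assumption:
```
import requests

resp = requests.post(
    "http://localhost:5000/predictions",
    json={
        "input": {
            "message": "Write a python function that reads an html file from the internet and extracts the text content of all the h1 elements",
            "system_prompt": "Provide answers in Python",
            "max_new_tokens": 1024,
        }
    },
    timeout=300,  # local generation can take a while, especially on first run
)
resp.raise_for_status()
print(resp.json()["output"])
```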
Here is an example of a Python function that reads an HTML file from the internet and extracts the text content of all the h1 elements:
```
import requests
from bs4 import BeautifulSoup

def get_h1_text(url):
    response = requests.get(url)
    soup = BeautifulSoup(response.content, 'html.parser')
    h1_elements = soup.find_all('h1')
    h1_text = []
    for h1 in h1_elements:
        h1_text.append(h1.text.strip())
    return h1_text
```
This function uses the `requests` library to make an HTTP GET request to the specified URL, and the `BeautifulSoup` library to parse the HTML response. The function then uses the `find_all()` method of the `BeautifulSoup` object to find all the `h1` elements in the HTML document, and appends the text content of each `h1` element to a list called `h1_text`. Finally, the function returns the `h1_text` list.
Here is an example of how you could use this function:
```
url = 'https://www.example.com'
h1_text = get_h1_text(url)
print(h1_text)
```
This would print a list of all the text content of the `h1` elements in the HTML document at the specified URL.
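If you plan to reuse this function, it is worth failing loudly on HTTP errors and bounding the request time. A minimal hardened sketch of the same function; the 10-second default timeout is an illustrative assumption:
```
import requests
from bs4 import BeautifulSoup

def get_h1_text(url, timeout=10):
    # Bound the request and raise on 4xx/5xx responses instead of
    # silently parsing an error page.
    response = requests.get(url, timeout=timeout)
    response.raise_for_status()
    soup = BeautifulSoup(response.content, 'html.parser')
    return [h1.get_text(strip=True) for h1 in soup.find_all('h1')]
```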