01-ai / yi-34b-chat

The Yi series models are large language models trained from scratch by developers at 01.AI.

  • Public
  • 319.6K runs
  • A100 (80GB)
  • GitHub
  • License
  • Prediction

    01-ai/yi-34b-chat:914692bb
    ID
    ta73iwdbiwa3yuxhjb2lhzotxa
    Status
    Succeeded
    Source
    Web
    Hardware
    A100 (80GB)
    Total duration
    Created
    by @nateraw

    Input

    top_k
    50
    top_p
    0.8
    prompt
    Write a script to download the images for the top 10 posts of all time from /r/pics using the PRAW library
    temperature
    0.3
    max_new_tokens
    1024
    prompt_template
    <|im_start|>system You are a helpful assistant<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant
    repetition_penalty
    1.2

    Output

    To create a script that downloads the images for the top 10 posts of all time from r/pics, you'll need to use the `PRAW` (Python Reddit API Wrapper) library. Below is an example script that demonstrates how to do this: ```python import praw from urllib import request import os import io import json # Your reddit account must be authorized as described here: https://praw.readthedocs.io/en/latest/getting_started/authentication.html reddit = praw.Reddit(client_id='your-client-id', client_secret='your-client-secret', username='your-username', password='<PASSWORD>') # Get the top submissions in 'all' category on r/pics submissions = reddit.subreddit('pics').top('all', limit=10) # Adjust 'limit' parameter according to your needs for submission in submissions: if not submission.stickied and submission.is_image(): print(f"Downloading image from {submission.url}") try: response = request.urlopen(submission.url) imgdata = response.read() with open(os.path.basename(submission.url), 'wb') as fh: fh.write(imgdata) except Exception as e: print(e) continue # If you want to save the metadata along with the images, you can also scrape some information from each post # For example: metadata = [] for submission in submissions: try: post_dict = {} post_dict['title'] = submission.title post_dict['score'] = submission.score post_dict['permalink'] = submission.permalink post_dict['author'] = submission.author post_dict['url'] = submission.url metadata.append(post_dict) except Exception as e: print(e) continue # Save the metadata to a file if desired with open('pics_posts_metadata.json', 'w') as outfile: json.dump(metadata, outfile) ``` Please replace `'your-client-id'`, `'your-client-secret'`, `'your-username'`, and `'your-password'` with your actual credentials obtained from Reddit when setting up your application. This script does the following: 1. Authenticates with Reddit using OAuth2 via PRAW. 2. Fetches the top 10 non-sticky submissions from r/pics sorted by "all time" popularity. 3. Checks if each submission is an image post (`submission.is_image()` returns True). 4. Downloads the image from the URL provided by the submission object. 5. Saves the downloaded image files locally with their original names. 6. Optionally scrapes additional metadata such as title, score, permalink, author, and url for each post. 7. Saves the metadata to a JSON file named `pics_posts_metadata.json`. Make sure to install the `PRAW` library before running the script: ```bash pip install praw ``` Also note that downloading content without proper attribution may violate copyright laws or go against Reddit's terms of service. Always ensure that you have permission to download and use the content, and consider giving credit to the original authors where appropriate.
    Generated in

Want to make some of these yourself?

Run this model