01-ai / yi-6b-chat

The Yi series models are large language models trained from scratch by developers at 01.AI.

  • Public
  • 6.4K runs
  • L40S
  • GitHub
  • License

Input

prompt
*string
max_new_tokens
integer

The maximum number of tokens the model should generate as output.

Default: 1024

temperature
number

The value used to modulate the next token probabilities.

Default: 0.3
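This value acts as a softmax temperature: the logits are divided by it before being converted to probabilities, so values below 1 sharpen the distribution and values above 1 flatten it. A minimal sketch of the idea (not the server's actual implementation):

```python
import math

def apply_temperature(logits, temperature=0.3):
    """Softmax over logits scaled by 1/temperature.
    Lower temperature sharpens the distribution; higher flattens it."""
    scaled = [l / temperature for l in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# The same logits, sampled "colder" (0.3) and at the neutral setting (1.0)
sharp = apply_temperature([2.0, 1.0, 0.5], temperature=0.3)
flat = apply_temperature([2.0, 1.0, 0.5], temperature=1.0)
```

At the default of 0.3 the most likely token captures far more probability mass than at 1.0, which makes the chat model's answers more deterministic.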

top_p
number

A probability threshold for generating the output. If < 1.0, only keep the top tokens with cumulative probability >= top_p (nucleus filtering). Nucleus filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751).

Default: 0.8
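Nucleus (top-p) filtering keeps the smallest set of highest-probability tokens whose cumulative probability reaches the threshold, then renormalizes. A short sketch of the rule described above (illustrative, not the server's code):

```python
def top_p_filter(probs, top_p=0.8):
    """Keep the smallest set of tokens whose cumulative probability
    reaches top_p (nucleus filtering); renormalize the survivors."""
    ranked = sorted(enumerate(probs), key=lambda x: x[1], reverse=True)
    kept, cumulative = [], 0.0
    for idx, p in ranked:
        kept.append((idx, p))
        cumulative += p
        if cumulative >= top_p:
            break
    total = sum(p for _, p in kept)
    return {idx: p / total for idx, p in kept}

# Tokens 0 and 1 together reach 0.8, so tokens 2 and 3 are dropped
filtered = top_p_filter([0.5, 0.3, 0.15, 0.05], top_p=0.8)
```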

top_k
integer

The number of highest probability tokens to consider for generating the output. If > 0, only keep the top k tokens with highest probability (top-k filtering).

Default: 50
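Top-k filtering is the simpler cousin of nucleus filtering: it keeps a fixed number of the most probable tokens regardless of how much probability they cover. A sketch (illustrative only):

```python
def top_k_filter(probs, top_k=50):
    """Keep only the top_k highest-probability tokens; renormalize."""
    ranked = sorted(enumerate(probs), key=lambda x: x[1], reverse=True)
    kept = ranked[:top_k]
    total = sum(p for _, p in kept)
    return {idx: p / total for idx, p in kept}

# With k=2, only the two most probable tokens (indices 1 and 3) survive
filtered = top_k_filter([0.1, 0.4, 0.2, 0.3], top_k=2)
```

In practice top-k and top-p are applied together: top-k caps the candidate count while top-p trims the low-probability tail.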

repetition_penalty
number

The penalty applied to tokens that already appear in the generated text; values greater than 1 make repetition less likely.

Default: 1.2
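A common formulation (the CTRL-style rule used by many inference stacks; assumed here, not confirmed for this model) divides a repeated token's logit by the penalty when it is positive and multiplies when it is negative, so the adjustment always pushes the logit down:

```python
def apply_repetition_penalty(logits, generated_ids, penalty=1.2):
    """CTRL-style repetition penalty: lower the logits of tokens that
    have already been generated so they are less likely to recur."""
    out = list(logits)
    for token_id in set(generated_ids):
        if out[token_id] > 0:
            out[token_id] /= penalty
        else:
            out[token_id] *= penalty
    return out

# Tokens 0 and 1 were already generated; token 2 is untouched
penalized = apply_repetition_penalty([2.4, -1.0, 0.5], generated_ids=[0, 1])
```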

prompt_template
string

The template used to format the prompt. The input prompt is inserted into the template using the `{prompt}` placeholder.

Default: "<|im_start|>system\nYou are a helpful assistant<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n"
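The default is a ChatML-style template. Applying it is a plain string substitution, which can be sketched as:

```python
DEFAULT_TEMPLATE = (
    "<|im_start|>system\nYou are a helpful assistant<|im_end|>\n"
    "<|im_start|>user\n{prompt}<|im_end|>\n"
    "<|im_start|>assistant\n"
)

def format_prompt(prompt, template=DEFAULT_TEMPLATE):
    """Insert the raw prompt into the chat template via the
    {prompt} placeholder."""
    return template.format(prompt=prompt)

formatted = format_prompt("What is nucleus sampling?")
```

The template ends mid-turn, after the assistant marker, so the model's generation naturally continues as the assistant's reply.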

Output

Certainly! Below is a Python script that uses the `praw` (Python Reddit API Wrapper) library to fetch the top 10 posts with images from the `/r/pics` subreddit. Please replace `API_KEY` and `SUBREDDIT` with your own API key and subdomain respectively before running this code:

```python
import praw
from tqdm import tqdm  # For progress bars

def get_top_posts(subreddit, limit=10):
    """Fetches the latest 'limit' number of submissions from a reddit
    subscription or search query.
    """
    client = praw.Reddit('your-api-key')
    search_result = client.get_submissions(subreddit='YourSubredditNameHere',
                                           sort='new', limit=limit)
    return [submission.url for submission in search_result]

if __name__ == "__main__":
    API_KEY = ''  # Replace with your actual api key
    SUBREDDIT = r'/r/pics'  # Replace with your actual subdomain

    images = []
    for post in tqdm(get_top_posts(SUBREDDIT))[:10]:
        try:
            image = praw.models.linkpreview.LinkPreview(post['data']['preview'])
            images.append({
                'title': image.title,
                'permalink': post['data']['permalink'],
                'thumbnail': image.thumbnail_src,
                'source': f'{image.host}{image.display_sources[0]}'})
        except KeyError as e:
            print(f'\n{e} - Skipping Post {post["id"]}')

    with open('/path/to/output.json', 'w') as outfile:
        json.dump([{'imgUrl': img['source'], 'altText': img['title']}
                   for img in images], outfile)
```

This script will output an array of dictionaries containing information about each image, including its source URL, title, and thumbnail preview. The file will be saved as JSON format at `/path/to/output.json`. You can adjust the `limit` parameter if you want more than ten results returned. Remember to change `YOUR_API_KEY` and `YOUR_SUBDOMAIN` to match your personal settings.

Run time and cost

This model costs approximately $0.0033 to run on Replicate, or 303 runs per $1, but this varies depending on your inputs. It is also open source and you can run it on your own computer with Docker.

This model runs on Nvidia L40S GPU hardware. Predictions typically complete within 4 seconds.

Readme

See the full model card here. The model served here uses the original, un-quantized weights.

NOTE: As per the license, Replicate was granted permission to share the model here.