01-ai / yi-6b

The Yi series models are large language models trained from scratch by developers at 01.AI.

Run with an API
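
Below is a minimal sketch of calling this model from Python with the `replicate` client. It assumes a REPLICATE_API_TOKEN is set in your environment, and the input field names mirror the parameter list in the Input section below, which is reconstructed from this page, so verify them against the model's API schema before relying on them.

```python
import replicate

# Sketch only: input names follow the Input section on this page; check the
# model's API schema if a name differs.
output = replicate.run(
    "01-ai/yi-6b",  # you may need to pin a version: "01-ai/yi-6b:<version>"
    input={
        "prompt": "Q: If I have three apples and eat one, how many are left?\nA:",
        "max_new_tokens": 512,
        "temperature": 0.8,
        "top_p": 0.95,
        "top_k": 50,
        "presence_penalty": 0,
        "frequency_penalty": 0,
        # The default template passes the prompt through unchanged; something
        # like "Question: {prompt}\nAnswer:" is an illustrative alternative.
        "prompt_template": "{prompt}",
    },
)

# Text models on Replicate typically stream output as a sequence of strings.
print("".join(output))
```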

Input

prompt (string, required)
The prompt to send to the model.

max_new_tokens (integer)
The maximum number of tokens the model should generate as output.
Default: 512

temperature (number)
The value used to modulate the next token probabilities.
Default: 0.8

top_p (number)
A probability threshold for generating the output. If < 1.0, only keep the top tokens with cumulative probability >= top_p (nucleus filtering). Nucleus filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751).
Default: 0.95

top_k (integer)
The number of highest-probability tokens to consider for generating the output. If > 0, only keep the top k tokens with the highest probability (top-k filtering). Temperature, top_p, and top_k interact during sampling; see the sketch after this list.
Default: 50

presence_penalty (number)
Presence penalty.
Default: 0

frequency_penalty (number)
Frequency penalty.
Default: 0

prompt_template (string)
The template used to format the prompt. The input prompt is inserted into the template using the `{prompt}` placeholder.
Default: "{prompt}"

Output

After eating one apple, I had 2 apples left. (H) - (A) = A

Run time and cost

This model costs approximately $0.0042 to run on Replicate, or 238 runs per $1, but this varies depending on your inputs. It is also open source and you can run it on your own computer with Docker.

This model runs on Nvidia L40S GPU hardware. Predictions typically complete within 5 seconds.
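
As a quick sanity check of those figures (approximate, since actual billing depends on how long each prediction runs):

```python
# Rough cost math behind the numbers quoted above (approximate figures).
cost_per_run = 0.0042          # USD per run, as quoted on this page
runs_per_dollar = 1 / cost_per_run
print(round(runs_per_dollar))  # ~238, matching "238 runs per $1"

# Estimated cost for a batch of prompts at that rate.
n_prompts = 1000
print(f"${n_prompts * cost_per_run:.2f} for {n_prompts} runs")  # $4.20
```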

Readme

See the full model card here.

The model served here is the AWQ-quantized version from here. Thank you to @TheBloke for sharing this model!

NOTE: As per the license, Replicate was granted permission to share the model here.