nwhitehead / llama2-70b-oasst-sft-v10

This model is the Open-Assistant fine-tune of Meta's Llama 2 70B LLM.

  • Public
  • 2 runs
  • A100 (80GB)

Input

string

Text prompt for the model

Default: "USER: Hello, who are you?\nASSISTANT:"
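The default value shows the dialogue format the model expects. A minimal sketch of assembling a prompt in that format across multiple turns (`build_prompt` is a hypothetical helper, not part of this model's API):

```python
def build_prompt(history, user_message):
    # history: list of (user, assistant) message pairs from earlier
    # turns; the final "ASSISTANT:" is left open for the model to fill.
    parts = [f"USER: {u}\nASSISTANT: {a}" for u, a in history]
    parts.append(f"USER: {user_message}\nASSISTANT:")
    return "\n".join(parts)

prompt = build_prompt([], "Hello, who are you?")
# → "USER: Hello, who are you?\nASSISTANT:"
```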

number
(minimum: 0.01, maximum: 2)

Temperature of the output; it's best to keep it below 1

Default: 0.5
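Temperature rescales the logits before sampling. This is the standard mechanism, sketched in plain Python; it is not code from this model's implementation:

```python
import math

def softmax_with_temperature(logits, temperature):
    # Logits are divided by the temperature before the softmax:
    # T < 1 sharpens the distribution, T > 1 flattens it.
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

sharp = softmax_with_temperature([2.0, 1.0, 0.0], 0.5)
flat = softmax_with_temperature([2.0, 1.0, 0.0], 2.0)
# Lower temperature puts more probability mass on the top logit,
# which is why values below 1 give more focused output.
```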

number
(minimum: 0.01, maximum: 1)

Cumulative probability cutoff for filtering candidate tokens (nucleus/top-p sampling)

Default: 1
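A sketch of the standard nucleus (top-p) filter this parameter is assumed to control; the default of 1 keeps all candidates:

```python
def top_p_filter(probs, p):
    # Keep the smallest set of highest-probability tokens whose
    # cumulative probability reaches p, then renormalize.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cumulative = [], 0.0
    for i in order:
        kept.append(i)
        cumulative += probs[i]
        if cumulative >= p:
            break
    total = sum(probs[i] for i in kept)
    return {i: probs[i] / total for i in kept}

# With p = 0.7, only the two most likely tokens survive here.
filtered = top_p_filter([0.5, 0.3, 0.2], 0.7)
```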

integer
(minimum: 1, maximum: 100)

Number of highest-probability candidate tokens to keep (top-k sampling)

Default: 20
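A sketch of standard top-k filtering, which this parameter is assumed to control; the default of 20 would keep only the 20 most likely tokens before sampling:

```python
def top_k_filter(probs, k):
    # Keep the k highest-probability token indices and renormalize.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept = order[:k]
    total = sum(probs[i] for i in kept)
    return {i: probs[i] / total for i in kept}

filtered = top_k_filter([0.1, 0.4, 0.2, 0.3], 2)  # keeps indices 1 and 3
```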

number
(minimum: 1, maximum: 1.5)

Penalty for repeated tokens in the model's output; 1 disables it

Default: 1
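One common repetition-penalty scheme is the CTRL-style rule also used by Hugging Face transformers; whether this model applies exactly this rule is an assumption. A sketch:

```python
def apply_repetition_penalty(logits, generated_ids, penalty):
    # Make already-generated tokens less likely: positive logits
    # shrink, negative logits grow more negative. penalty = 1 is a
    # no-op, which matches the default above.
    out = list(logits)
    for i in set(generated_ids):
        out[i] = out[i] / penalty if out[i] > 0 else out[i] * penalty
    return out

penalized = apply_repetition_penalty([2.0, -1.0, 0.5], [1, 2], 1.2)
```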

integer
(minimum: 1, maximum: 4096)

Maximum number of tokens to generate

Default: 50

integer
(minimum: 0, maximum: 4096)

Minimum number of tokens to generate

Default: 1
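A sketch of how the minimum and maximum token limits typically interact during decoding; this is the usual behavior, assumed rather than confirmed for this model:

```python
def should_stop(num_generated, saw_eos, min_tokens, max_tokens):
    # Hard cap: always stop once the maximum is reached.
    if num_generated >= max_tokens:
        return True
    # An end-of-sequence token only ends generation after the
    # minimum number of tokens has been produced.
    return saw_eos and num_generated >= min_tokens
```

With the defaults above (min 1, max 50), generation stops at the first end-of-sequence token after one token, or at 50 tokens, whichever comes first.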

integer
(minimum: -2147483648, maximum: 2147483647)

Seed for reproducible output; use -1 for a random seed

Default: -1
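Putting the inputs together, here is a sketch that assembles and range-checks a payload. The field names (prompt, temperature, top_p, top_k, repetition_penalty, max_new_tokens, min_new_tokens, seed) are guesses inferred from the parameter descriptions above, not confirmed API names:

```python
# Documented (min, max) ranges for each numeric input.
RANGES = {
    "temperature": (0.01, 2),
    "top_p": (0.01, 1),
    "top_k": (1, 100),
    "repetition_penalty": (1, 1.5),
    "max_new_tokens": (1, 4096),
    "min_new_tokens": (0, 4096),
    "seed": (-2147483648, 2147483647),
}

def validate(payload):
    # Reject any numeric field outside its documented range.
    for key, (low, high) in RANGES.items():
        if key in payload and not low <= payload[key] <= high:
            raise ValueError(f"{key}={payload[key]} outside [{low}, {high}]")
    return payload

payload = validate({
    "prompt": "USER: Hello, who are you?\nASSISTANT:",
    "temperature": 0.5,
    "top_p": 1.0,
    "top_k": 20,
    "repetition_penalty": 1.0,
    "max_new_tokens": 50,
    "seed": -1,  # -1 requests a random seed; any fixed value
                 # makes the output reproducible across runs
})
```

The resulting dict would then be passed as the `input` argument of a client call, e.g. `replicate.run("nwhitehead/llama2-70b-oasst-sft-v10", input=payload)`, assuming the official Replicate Python client.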


Run time and cost

This model runs on Nvidia A100 (80GB) GPU hardware. We don't yet have enough runs of this model to provide performance information.

Readme

This model doesn't have a readme.