lucataco / dolphin-2.2.1-mistral-7b

Mistral-7B-v0.1 fine-tuned for chat with the Dolphin dataset (an open-source implementation of Microsoft's Orca)

  • Public
  • 29K runs
  • GitHub
  • Paper
  • License

Run time and cost

This model runs on Nvidia A40 GPU hardware. Predictions typically complete within 5 seconds, though prediction time varies significantly with the inputs.
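A minimal sketch of calling this model with the official Replicate Python client is shown below. It assumes `REPLICATE_API_TOKEN` is set in your environment; the input field names (`prompt`, `max_new_tokens`, `temperature`) are assumptions, so check the model's Input schema on Replicate for the exact parameters and whether a version hash must be pinned.

```python
# Minimal sketch using the official Replicate Python client.
# Requires REPLICATE_API_TOKEN in the environment.
import replicate

output = replicate.run(
    # A pinned version hash ("lucataco/dolphin-2.2.1-mistral-7b:<version>")
    # may be required depending on your client version.
    "lucataco/dolphin-2.2.1-mistral-7b",
    input={
        # Field names below are assumptions -- verify against the model's
        # Input schema on its Replicate page.
        "prompt": "Explain what a checkpoint release is in one sentence.",
        "max_new_tokens": 256,
        "temperature": 0.7,
    },
)

# Language models on Replicate typically stream tokens; join them into one string.
print("".join(output))
```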

Readme

About

Dolphin 2.2.1 🐬 https://erichartford.com/dolphin

Dolphin-2.1-mistral-7b’s training was sponsored by a16z.

This is a checkpoint release to fix overfit training: i.e., the model was responding with CoT even when it was not requested, and it was also too compliant even when the request made no sense. This one should be better.