replicate / dolly-v2-12b

An open-source, instruction-tuned large language model developed by Databricks


Readme

dolly-v2-12b is a GPT-style large language model trained to follow human instructions. Based on EleutherAI's pythia-12b, dolly-v2-12b was fine-tuned by Databricks on databricks-dolly-15k, an instruction dataset that Databricks developed and released alongside the model. The training data covers capability domains reported in the InstructGPT paper, such as summarization, classification, information extraction, brainstorming, closed and open QA, and text generation.

Model description

dolly-v2-12b is a fine-tuned version of EleutherAI's pythia-12b, a GPT-style causal language model trained on the Pile; see the pythia-12b model card for more information about the model architecture. Databricks produced dolly-v2-12b by fine-tuning the pythia-12b checkpoint on roughly 15,000 instruction examples written by Databricks employees and released under a permissive license (CC-BY-SA).
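The fine-tuned weights are published on Hugging Face as databricks/dolly-v2-12b, so one quick way to experiment locally is through the transformers pipeline API. A minimal loading sketch, assuming the transformers and accelerate packages are installed and enough GPU memory is available for a 12B-parameter checkpoint:

```python
import torch
from transformers import pipeline

# trust_remote_code lets transformers use the custom instruction-following
# pipeline code that Databricks ships alongside the model weights.
generate_text = pipeline(
    model="databricks/dolly-v2-12b",
    torch_dtype=torch.bfloat16,   # halves memory use vs. float32
    trust_remote_code=True,
    device_map="auto",            # spread layers across available devices
)
```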

Intended use

dolly-v2-12b is an experimental model trained to follow human instructions.
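For instance, a single instruction can be passed directly to the pipeline loaded above. The instruction below is a made-up example, and the return shape is an assumption based on the standard transformers text-generation convention:

```python
# Hypothetical instruction; the pipeline is assumed to return a list of
# {"generated_text": ...} dicts, per the usual transformers convention.
result = generate_text("Explain the difference between nuclear fission and fusion.")
print(result[0]["generated_text"])
```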

Ethical considerations

This model is not designed to avoid harmful or undesirable behavior, and its output should not be trusted unconditionally in contexts where inaccuracy carries risks or costs.

Caveats and recommendations

This model struggles with a range of operations (see the known limitations documented in Databricks' dolly repository); however, it nonetheless performs surprisingly well across many contexts and tasks. As you experiment with the model, explore the effects of different prompt formats; a sketch of the instruction template the model was trained with appears below.
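As a starting point for prompt experiments, Dolly was fine-tuned on records in an Alpaca-style instruction template. The wording below is a sketch of that format rather than an official specification, and build_prompt is a hypothetical helper:

```python
# Sketch of the Alpaca-style template Dolly-style models are trained with;
# the exact intro sentence and section markers are assumptions, not official docs.
INTRO = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request."
)

def build_prompt(instruction: str) -> str:
    # The trailing "### Response:" cues the model to begin answering.
    return f"{INTRO}\n\n### Instruction:\n{instruction}\n\n### Response:\n"

print(build_prompt("Summarize the plot of Moby-Dick in two sentences."))
```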