Dolly
This is a Cog implementation of Dolly for Replicate, following https://github.com/chenxwh/dolly and adapted from the original databrickslabs repo: https://github.com/databrickslabs/dolly
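Once the model is published on Replicate, it can be called from Python. This is a minimal sketch only: the model slug ("chenxwh/dolly") and the input field name ("prompt") are assumptions based on common Replicate conventions, so check the model page for the exact identifier and input schema.

```python
# Hedged example: run the Cog-packaged Dolly model via the Replicate Python client.
# "chenxwh/dolly" and the "prompt" input are assumed names, not confirmed by this repo.
import replicate

output = replicate.run(
    "chenxwh/dolly",  # assumed model slug; a pinned version hash may be required
    input={"prompt": "Explain what instruction following means for a language model."},
)
print(output)
```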
Below is the README from https://github.com/databrickslabs/dolly:
Databricks’ Dolly, a large language model trained on the Databricks Machine Learning Platform, demonstrates that a two-year-old open source model (GPT-J) can, when subjected to just 30 minutes of fine tuning on a focused corpus of 50k records (Stanford Alpaca), exhibit surprisingly high quality instruction following behavior not characteristic of the foundation model on which it is based. We believe this finding is important because it demonstrates that the ability to create powerful artificial intelligence technologies is vastly more accessible than previously realized.
Databricks is committed to ensuring that every organization and individual benefits from the transformative power of artificial intelligence. The Dolly model family represents our first steps along this journey, and we’re excited to share this technology with the world.
Please note that while GPT-J 6B is Apache 2.0 licensed, the Alpaca dataset is licensed under Creative Commons NonCommercial (CC BY-NC 4.0).
Dolly is intended exclusively for research purposes and is not licensed for commercial use.
Model Overview
In the following passages we refer to dolly-6b, the first in the Dolly family of models and the model that this repository presently implements.
dolly-6b is a 6 billion parameter causal language model created by Databricks that is derived from EleutherAI’s GPT-J (released June 2021) and fine-tuned on a ~52K record instruction corpus (Stanford Alpaca) consisting of question/answer pairs generated using the techniques outlined in the Self-Instruct paper. Dolly was trained using deepspeed ZeRO 3 on the Databricks Machine Learning Platform in just 30 minutes using a single NDasrA100_v4 machine with 8x A100 40GB GPUs.
Like its base model, dolly-6b has six billion parameters consisting of 28 transformer layers with 16 attention heads each. It employs Rotary Position Embedding (RoPE) and shares the same tokenizer as GPT-3. GPT-J was trained on The Pile, a 400B token dataset of diverse documents designed primarily for text generation tasks.
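Since dolly-6b is a standard causal language model, it can be loaded and queried with Hugging Face Transformers. The sketch below is hedged: the Hub id "databricks/dolly-v1-6b" and the Alpaca-style instruction/response prompt format are assumptions and should be adjusted to the checkpoint you actually use; running a 6B-parameter model also requires a GPU with sufficient memory (or float16/offloading, as shown).

```python
# Hedged sketch: load a Dolly-style 6B causal LM and generate a response.
# The model id and the prompt template below are assumptions, not confirmed by this repo.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "databricks/dolly-v1-6b"  # assumed Hugging Face Hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Alpaca-style instruction prompt (assumed format for an Alpaca-tuned model).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what a causal language model is.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.95)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```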
Limitations
dolly-6b is intended exclusively for research purposes and is not licensed for commercial use.
dolly-6b is not a state-of-the-art generative language model and, though quantitative benchmarking is ongoing, is not intended to perform competitively with more modern model architectures or models subject to larger pretraining corpora. For example, we expect the Alpaca model, derived from LLaMA-7B (trained on 1T tokens vs. The Pile’s 400B, and with years of scientific advances behind it), to be superior in its generative quality relative to Dolly. What’s most notable about Dolly is the degree of its instruction following capability given that it’s based on a freely available open source model anyone can download and use.
The Dolly model family is under active development, and so any list of shortcomings is unlikely to be exhaustive, but we include known limitations and misfires here as a means to document and share our preliminary findings with the community. In particular, dolly-6b struggles with syntactically complex prompts, mathematical operations, factual errors, dates and times, open-ended question answering, hallucination, enumerating lists of specific length, and stylistic mimicry.