joehoover / bart-large-mnli

Zero-shot document classification with a lightweight model.

Run time and cost

This model runs on Nvidia T4 GPU hardware.

Readme

This model performs zero-shot document classification for short documents.

It spins up an instance of the HuggingFace Transformers zero-shot-classification pipeline with a large BART model that has been fine-tuned on NLI. As zero-shot inference goes, this is an older approach (see here), but it is also small and fast compared to today's large models.
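
For reference, the underlying pipeline can be reproduced with a few lines of Transformers code. This is a minimal sketch, assuming the facebook/bart-large-mnli checkpoint; the text and labels are illustrative, and the Replicate wrapper may configure things differently.

```python
# Minimal sketch of the zero-shot-classification pipeline described above.
# The checkpoint name, example text, and labels are illustrative assumptions.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="facebook/bart-large-mnli",
)

result = classifier(
    "The new GPU ships with 24 GB of memory and improved ray tracing.",
    candidate_labels=["technology", "sports", "politics"],
    hypothesis_template="This example is about {}.",
)
print(result["labels"], result["scores"])
```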

Model description

This system is a pipeline that uses a BART Large model that’s been fine-tuned on MNLI, a large natural language inference dataset.

The pipeline formulates sequence classification as an NLI problem. Given a set of class labels, an input sequence, and a hypothesis template:

  1. A hypothesis is constructed for each label by piping labels into the hypothesis template.
  2. Each hypothesis is appended to the user input, yielding a complete input sequence.
  3. Each input sequence is passed to the NLI model, which predicts whether the user input (the premise) entails, contradicts, or is neutral toward the hypothesis.
  4. The entailment logits associated with each hypothesis (remember, one hypothesis is constructed per label) are then normalized with a softmax to yield the final scores over labels returned in the output (see the sketch below).
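
For concreteness, here is a rough sketch of those four steps using the Transformers API directly. It assumes the facebook/bart-large-mnli checkpoint, and the example text and labels are made up; the real pipeline additionally handles batching, truncation strategies, and multi-label options.

```python
# Rough sketch of steps 1-4 above, assuming the facebook/bart-large-mnli checkpoint.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "facebook/bart-large-mnli"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "The senate passed the infrastructure bill on Tuesday."  # illustrative
labels = ["politics", "sports", "technology"]                      # illustrative
hypothesis_template = "This example is about {}."

# Steps 1-2: build one premise/hypothesis pair per label.
hypotheses = [hypothesis_template.format(label) for label in labels]
inputs = tokenizer(
    [premise] * len(labels), hypotheses,
    return_tensors="pt", padding=True, truncation=True,
)

# Step 3: run the NLI model; each row has contradiction/neutral/entailment logits.
with torch.no_grad():
    logits = model(**inputs).logits

# Step 4: softmax over the entailment logit of each hypothesis.
# (Look up the entailment index from the config rather than hard-coding it.)
entailment_id = model.config.label2id["entailment"]
scores = logits[:, entailment_id].softmax(dim=0)
for label, score in sorted(zip(labels, scores.tolist()), key=lambda x: -x[1]):
    print(f"{label}: {score:.3f}")
```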

Outputs are returned as a dictionary with four keys (an illustrative example follows the list):

  • hypothesis_template: The hypothesis template used to construct the full input sequences.
  • labels: Class labels specified by the user, ordered by descending score.
  • scores: Scores associated with each label, in the same order.
  • sequence: The input sequence used for classification.
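
For illustration only, an output might look like the following (the template, labels, sequence, and scores are placeholders, not real model predictions):

```python
# Placeholder output shape; these values are not real predictions.
{
    "hypothesis_template": "This example is about {}.",
    "labels": ["politics", "technology", "sports"],
    "scores": [0.91, 0.06, 0.03],
    "sequence": "The senate passed the infrastructure bill on Tuesday.",
}
```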

Intended use

Prototyping and low-cost zero-shot document classification.

Ethical considerations

This model contains social and cultural biases that may impact predictions. It is also not particularly accurate or well-calibrated. It should not be used in production without serious consideration of these risks.

Caveats and recommendations

For best results, use labels and a hypothesis template that are congruent with each other. Note also that the model is not robust to changes in surface form, so altering characteristics like punctuation and capitalization may change accuracy (for better or worse!).
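
To illustrate congruence, here is a sketch using the Transformers pipeline (the label wording, template, and text are illustrative assumptions, not settings from this model):

```python
# Sketch of congruent vs. incongruent label/template pairings (all values illustrative).
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
text = "The quarterly report shows revenue grew 12% year over year."

# Congruent: topic labels read naturally inside a topic-oriented template.
classifier(
    text,
    candidate_labels=["finance", "sports", "weather"],
    hypothesis_template="This document is about {}.",
)

# Incongruent: sentiment labels in the same template produce awkward hypotheses
# like "This document is about positive." A better pairing would be something
# like hypothesis_template="The sentiment of this document is {}."
classifier(
    text,
    candidate_labels=["positive", "negative"],
    hypothesis_template="This document is about {}.",
)
```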