lucataco / phi-3-mini-128k-instruct

Phi-3-Mini-128K-Instruct is a 3.8 billion-parameter, lightweight, state-of-the-art open model trained using the Phi-3 datasets




Run time and cost

This model runs on Nvidia A40 (Large) GPU hardware. Predictions typically complete within 52 seconds. The predict time for this model varies significantly based on the inputs.
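For reference, here is a minimal sketch of invoking this model through the Replicate Python client. The input parameter names and the output shape are assumptions based on typical language-model deployments on Replicate; check the model's API page for the exact schema and pinned version hash.

```python
# Minimal sketch of calling this model via the Replicate Python client.
# Requires: pip install replicate, and REPLICATE_API_TOKEN set in the env.
import replicate

output = replicate.run(
    "lucataco/phi-3-mini-128k-instruct",
    input={
        # Parameter names assumed; confirm against the model's API tab.
        "prompt": "Explain the difference between a list and a tuple in Python.",
        "max_new_tokens": 256,
    },
)

# Language models on Replicate typically stream output as an iterator of strings.
print("".join(output))
```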


Model Summary

The Phi-3-Mini-128K-Instruct is a 3.8 billion-parameter, lightweight, state-of-the-art open model trained using the Phi-3 datasets. This dataset includes both synthetic data and filtered publicly available website data, with an emphasis on high-quality and reasoning-dense properties. The model belongs to the Phi-3 family; the Mini version comes in two variants, 4K and 128K, which denote the context length (in tokens) each can support.

After initial training, the model underwent a post-training process that involved supervised fine-tuning and direct preference optimization to enhance its ability to follow instructions and adhere to safety measures. When evaluated against benchmarks that test common sense, language understanding, mathematics, coding, long-term context, and logical reasoning, Phi-3 Mini-128K-Instruct demonstrated robust, state-of-the-art performance among models with fewer than 13 billion parameters.

Intended Uses

Primary use cases

The model is intended for commercial and research use in English. It is suited for applications that require:

  1. Memory/compute constrained environments
  2. Latency-bound scenarios
  3. Strong reasoning (especially code, math, and logic)

Our model is designed to accelerate research on language and multimodal models, and to serve as a building block for generative AI powered features.

Use case considerations

Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using them within a specific downstream use case, particularly for high-risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.

Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.

Responsible AI Considerations

Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:

  • Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.
  • Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
  • Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.
  • Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
  • Limited Scope for Code: The majority of Phi-3 training data is Python code using common packages such as typing, math, random, collections, datetime, and itertools. If the model generates Python scripts that utilize other packages, or scripts in other languages, we strongly recommend users manually verify all API uses.

Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:

  • Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
  • High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
  • Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG); a minimal sketch follows this list.
  • Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
  • Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.
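To make the RAG point above concrete, here is a minimal sketch of grounding an answer in retrieved context. The `search_documents` retriever is a hypothetical stand-in; production systems typically query a vector store or search index instead.

```python
# Minimal RAG sketch: retrieve use-case specific context, then instruct the
# model to answer only from that context.
def search_documents(query: str, k: int = 3) -> list[str]:
    """Hypothetical retriever; replace with your own index or vector store."""
    corpus = {
        "returns": "Items may be returned within 30 days with a receipt.",
        "shipping": "Standard shipping takes 3-5 business days.",
    }
    return [text for key, text in corpus.items() if key in query.lower()][:k]

def build_grounded_prompt(question: str) -> str:
    context = "\n".join(search_documents(question))
    return (
        "Answer using only the context below. If the context is "
        "insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

print(build_grounded_prompt("What is your returns policy?"))
```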



Model

  • Architecture: Phi-3 Mini-128K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidelines.
  • Inputs: Text. It is best suited for prompts using chat format (see the example after this list).
  • Context length: 128K tokens
  • GPUs: 512 H100-80G
  • Training time: 7 days
  • Training data: 3.3T tokens
  • Outputs: Generated text in response to the input
  • Dates: Our models were trained between February and April 2024
  • Status: This is a static model trained on an offline dataset with a cutoff date of October 2023. Future versions of the tuned models may be released as we improve models.
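Below is a minimal sketch of chat-format inference with Hugging Face transformers, modeled on the upstream microsoft/Phi-3-mini-128k-instruct usage. The generation arguments are illustrative, and `trust_remote_code` is only needed on transformers releases that predate native Phi-3 support.

```python
# Chat-format inference sketch for Phi-3 Mini-128K-Instruct.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "microsoft/Phi-3-mini-128k-instruct"
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="cuda", torch_dtype="auto", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# The pipeline applies the model's chat template to the message list.
messages = [
    {"role": "user", "content": "Solve 2x + 3 = 7 and explain each step."},
]

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
output = pipe(
    messages,
    max_new_tokens=500,
    return_full_text=False,  # return only the assistant's reply
    do_sample=False,         # greedy decoding
)
print(output[0]["generated_text"])
```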


Training Datasets

Our training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of 1) publicly available documents filtered rigorously for quality, selected high-quality educational data, and code; 2) newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, and general knowledge of the world (science, daily activities, theory of mind, etc.); 3) high-quality chat-format supervised data covering various topics to reflect human preferences on different aspects such as instruction-following, truthfulness, honesty, and helpfulness.


Fine-tuning

A basic example of multi-GPU supervised fine-tuning (SFT) with the TRL and Accelerate modules is provided here.
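For orientation, here is a minimal sketch of that setup. It assumes TRL ~0.8-era APIs (newer releases move `dataset_text_field` and `max_seq_length` onto `SFTConfig`), and the dataset is a common demo choice picked purely for illustration.

```python
# train.py - minimal multi-GPU SFT sketch with TRL; launch with:
#   accelerate launch train.py
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

model_id = "microsoft/Phi-3-mini-128k-instruct"
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Any dataset with a plain-text field works; this one is a common demo set.
dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",   # column holding the raw training text
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="phi3-sft",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        bf16=True,
    ),
)
trainer.train()
```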


Benchmarks

We report the results for Phi-3-Mini-128K-Instruct on standard open-source benchmarks measuring the model’s reasoning ability (both common sense reasoning and logical reasoning). We compare to Phi-2, Mistral-7b-v0.1, Mixtral-8x7b, Gemma 7B, Llama-3-8B-Instruct, and GPT-3.5.

All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation.

As is now standard, we use few-shot prompts to evaluate the models, at temperature 0. The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3. More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model.

The number of k-shot examples is listed per-benchmark.

| Benchmark | Phi-3-Mini-128K-In 3.8b | Phi-3-Small 7b (preview) | Phi-3-Medium 14b (preview) | Phi-2 2.7b | Mistral 7b | Gemma 7b | Llama-3-In 8b | Mixtral 8x7b | GPT-3.5 version 1106 |
|---|---|---|---|---|---|---|---|---|---|
| MMLU (5-Shot) | 68.1 | 75.3 | 78.2 | 56.3 | 61.7 | 63.6 | 66.5 | 68.4 | 71.4 |
| HellaSwag (5-Shot) | 74.5 | 78.7 | 83.2 | 53.6 | 58.5 | 49.8 | 71.1 | 70.4 | 78.8 |
| ANLI (7-Shot) | 52.8 | 55.0 | 58.7 | 42.5 | 47.1 | 48.7 | 57.3 | 55.2 | 58.1 |
| GSM-8K (0-Shot; CoT) | 83.6 | 86.4 | 90.8 | 61.1 | 46.4 | 59.8 | 77.4 | 64.7 | 78.1 |
| MedQA (2-Shot) | 55.3 | 58.2 | 69.8 | 40.9 | 49.6 | 50.0 | 60.5 | 62.2 | 63.4 |
| AGIEval (0-Shot) | 36.9 | 45.0 | 49.7 | 29.8 | 35.1 | 42.1 | 42.0 | 45.2 | 48.4 |
| TriviaQA (5-Shot) | 57.1 | 59.1 | 73.3 | 45.2 | 72.3 | 75.2 | 67.7 | 82.2 | 85.8 |
| Arc-C (10-Shot) | 84.0 | 90.7 | 91.9 | 75.9 | 78.6 | 78.3 | 82.8 | 87.3 | 87.4 |
| Arc-E (10-Shot) | 95.2 | 97.1 | 98.0 | 88.5 | 90.6 | 91.4 | 93.4 | 95.6 | 96.3 |
| PIQA (5-Shot) | 83.6 | 87.8 | 88.2 | 60.2 | 77.7 | 78.1 | 75.7 | 86.0 | 86.6 |
| SociQA (5-Shot) | 76.1 | 79.0 | 79.4 | 68.3 | 74.6 | 65.5 | 73.9 | 75.9 | 68.3 |
| BigBench-Hard (0-Shot) | 71.5 | 75.0 | 82.5 | 59.4 | 57.3 | 59.6 | 51.5 | 69.7 | 68.32 |
| WinoGrande (5-Shot) | 72.5 | 82.5 | 81.2 | 54.7 | 54.2 | 55.6 | 65.0 | 62.0 | 68.8 |
| OpenBookQA (10-Shot) | 80.6 | 88.4 | 86.6 | 73.6 | 79.8 | 78.6 | 82.6 | 85.8 | 86.0 |
| BoolQ (0-Shot) | 78.7 | 82.9 | 86.5 | -- | 72.2 | 66.0 | 80.9 | 77.6 | 79.1 |
| CommonSenseQA (10-Shot) | 78.0 | 80.3 | 82.6 | 69.3 | 72.6 | 76.2 | 79.0 | 78.1 | 79.6 |
| TruthfulQA (10-Shot) | 63.2 | 68.1 | 74.8 | -- | 52.1 | 53.0 | 63.2 | 60.1 | 85.8 |
| HumanEval (0-Shot) | 57.9 | 59.1 | 54.7 | 47.0 | 28.0 | 34.1 | 60.4 | 37.8 | 62.2 |
| MBPP (3-Shot) | 62.5 | 71.4 | 73.7 | 60.6 | 50.8 | 51.5 | 67.7 | 60.2 | 77.8 |
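For illustration, here is a sketch of what few-shot, temperature-0 prompting looks like. The actual prompts live in the Microsoft-internal tool mentioned above, so the exact format here is an assumption.

```python
# Toy few-shot evaluation prompt builder: k exemplars are prepended to each
# test question, and the answer is then decoded greedily (temperature 0).
def build_few_shot_prompt(exemplars: list[tuple[str, str]], question: str) -> str:
    shots = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in exemplars)
    return f"{shots}\n\nQ: {question}\nA:"

exemplars = [
    ("What is 2 + 2?", "4"),
    ("What is 10 - 3?", "7"),
]
prompt = build_few_shot_prompt(exemplars, "What is 6 * 7?")
# Generation would then run with greedy decoding, e.g.:
#   output = pipe(prompt, max_new_tokens=16, do_sample=False)
print(prompt)
```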



Hardware

Note that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:

  • NVIDIA A100
  • NVIDIA A6000
  • NVIDIA H100

If you want to run the model on:

  • NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from_pretrained() with attn_implementation="eager" (see the snippet below)
  • Optimized inference on GPU, CPU, and mobile: use the ONNX models (128K)
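For example, a minimal sketch of the eager-attention fallback (the model ID shown is the upstream Hugging Face checkpoint):

```python
# Loading on GPUs without flash-attention support (e.g. V100 or earlier):
# fall back to the standard "eager" attention implementation.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-128k-instruct",
    torch_dtype="auto",
    trust_remote_code=True,
    attn_implementation="eager",  # instead of "flash_attention_2"
)
```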

Cross Platform Support

The ONNX Runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-128K-Instruct ONNX model here.

Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs.
Along with DirectML, ONNX Runtime provides cross-platform support for Phi-3 across a range of devices: CPU, GPU, and mobile.
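As a rough sketch, running one of these ONNX packages with the onnxruntime-genai Python bindings looks like the following. This mirrors the 0.2-era Phi-3 tutorial API (later releases changed the generation loop), and the model directory path is a placeholder for wherever the downloaded ONNX package lives.

```python
# Sketch of token-by-token generation with onnxruntime-genai (~0.2 API).
import onnxruntime_genai as og

model = og.Model("phi3-mini-128k-instruct-onnx/cpu-int4")  # placeholder path
tokenizer = og.Tokenizer(model)
stream = tokenizer.create_stream()

# Phi-3 chat template, per the upstream model card.
prompt = "<|user|>\nWhat is the capital of France?<|end|>\n<|assistant|>"

params = og.GeneratorParams(model)
params.set_search_options(max_length=200)
params.input_ids = tokenizer.encode(prompt)

generator = og.Generator(model, params)
while not generator.is_done():
    generator.compute_logits()
    generator.generate_next_token()
    print(stream.decode(generator.get_next_tokens()[0]), end="", flush=True)
```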

Here are some of the optimized configurations we have added:

  1. ONNX models for int4 DML: Quantized to int4 via AWQ
  2. ONNX model for fp16 CUDA
  3. ONNX model for int4 CUDA: Quantized to int4 via RTN
  4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN
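To make the "RTN" label concrete, here is a toy round-to-nearest int4 sketch in NumPy. Real ONNX tooling quantizes per block with packed int4 storage, and AWQ (used for the DML models) additionally uses activation statistics to choose the scales; this only shows the core rounding idea on one weight group.

```python
# Toy round-to-nearest (RTN) int4 quantization of a single weight group.
import numpy as np

def rtn_int4(weights: np.ndarray):
    # Symmetric int4: representable integers are -8..7.
    scale = np.abs(weights).max() / 7.0
    q = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.array([0.12, -0.51, 0.33, 0.07], dtype=np.float32)
q, s = rtn_int4(w)
print(q, dequantize(q, s))  # quantized ints and their float reconstruction
```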


License

The model is licensed under the MIT license.


Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties’ policies.