titocosta / meditron-70b-awq

Meditron-70B-v1.0, from the open-source Meditron suite of medical LLMs, quantized with AWQ.

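For local inference, an AWQ checkpoint can be served with a runtime that supports the format, such as vLLM. Below is a minimal sketch; the Hugging Face repository id is an assumption for illustration and is not confirmed by this page:

```python
# Minimal sketch: serving an AWQ-quantized Meditron checkpoint with vLLM.
# The repository id below is an assumption; substitute the actual location
# of the quantized weights.
from vllm import LLM, SamplingParams

llm = LLM(
    model="TheBloke/meditron-70B-AWQ",  # assumed repo id, for illustration only
    quantization="awq",                 # tell vLLM the weights are AWQ-quantized
    dtype="float16",                    # AWQ kernels compute activations in fp16
    max_model_len=4096,                 # matches Meditron's 4k context length
)

params = SamplingParams(temperature=0.2, max_tokens=256)
outputs = llm.generate(
    ["What are the first-line treatments for community-acquired pneumonia?"],
    params,
)
print(outputs[0].outputs[0].text)
```

AWQ stores 4-bit weights with per-group scales, so the 70B model fits on far less GPU memory than the fp16 original while keeping activations in fp16 at inference time.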

Meditron is a suite of open-source medical Large Language Models (LLMs).

We release Meditron-7B and Meditron-70B, which are adapted to the medical domain from Llama-2 through continued pretraining on a comprehensively curated medical corpus, including selected PubMed papers and abstracts, a new dataset of internationally recognized medical guidelines, and a general-domain corpus.

When finetuned on relevant task data, Meditron-70B outperforms Llama-2-70B, GPT-3.5, and Flan-PaLM on multiple medical reasoning tasks.

Advisory Notice

While Meditron is designed to encode medical knowledge from sources of high-quality evidence, it is not yet adapted to deliver this knowledge appropriately, safely, or within professional actionable constraints. We recommend against using Meditron in medical applications without extensive use-case alignment and additional testing, specifically including randomized controlled trials in real-world practice settings.

Model Details

  • Developed by: EPFL LLM Team
  • Model type: Causal decoder-only transformer language model
  • Language(s): English (mainly)
  • Model license: LLAMA 2 COMMUNITY LICENSE AGREEMENT
  • Code license: APACHE 2.0 LICENSE
  • Continue-pretrained from model: Llama-2-70B
  • Context length: 4k tokens
  • Input: Text only
  • Output: Text only
  • Status: This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we enhance the model's performance.
  • Knowledge cutoff: August 2023
  • Trainer: epflLLM/Megatron-LLM
  • Paper: Meditron-70B: Scaling Medical Pretraining for Large Language Models
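Since this deployment is hosted on Replicate, it can also be invoked through the Replicate Python client. A minimal sketch follows; the input field names (prompt, max_new_tokens, temperature) are assumptions about this deployment's schema, not confirmed by this page:

```python
# Minimal sketch: calling the hosted model through the Replicate Python client.
# Requires REPLICATE_API_TOKEN in the environment. The input field names below
# are assumptions about this deployment's schema.
import replicate

output = replicate.run(
    "titocosta/meditron-70b-awq",  # latest published version is used
    input={
        "prompt": "List common contraindications for metformin.",
        "max_new_tokens": 256,  # assumed parameter name
        "temperature": 0.2,     # assumed parameter name
    },
)

# Hosted language models on Replicate typically stream output as chunks of text.
print("".join(output))
```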