titocosta / meditron

Meditron-7B-v1.0 from Meditron's open-source suite of medical LLMs.

Run time and cost

This model costs approximately $0.47 to run on Replicate, or roughly 2 runs per $1, but this varies depending on your inputs. It is also open source, and you can run it on your own computer with Docker.
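As a sketch of what a hosted run looks like, the snippet below uses Replicate's Python client. The input parameter names ("prompt", "max_new_tokens") are assumptions based on typical Replicate text-generation models; check this model's API schema for the exact names, and pin a version id from the model page if your client version requires one.

```python
import replicate  # pip install replicate; set REPLICATE_API_TOKEN in your environment

# Hypothetical input schema; verify parameter names on the model's API page.
output = replicate.run(
    "titocosta/meditron",
    input={
        "prompt": "List common causes of iron-deficiency anemia.",
        "max_new_tokens": 256,
    },
)

# Language models on Replicate typically return an iterator of text chunks;
# join them into a single string.
print("".join(output))
```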

This model runs on Nvidia A40 (Large) GPU hardware. Predictions typically complete within 11 minutes, though prediction time varies significantly with the inputs.

Readme

Meditron-7B-v1.0 from Meditron’s open-source suite of medical LLMs. Supports streaming.
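Because the model supports streaming, tokens can be consumed as they are generated rather than after the prediction completes. Below is a minimal sketch using the Replicate Python client's streaming interface (available in recent client versions); the "prompt" input name is again an assumption to verify against the model's schema.

```python
import replicate

# Stream output tokens as they are generated instead of waiting for the
# full prediction to finish. "prompt" is an assumed input name.
for event in replicate.stream(
    "titocosta/meditron",
    input={"prompt": "What are the first-line treatments for hypertension?"},
):
    print(str(event), end="")
```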

Model card below; GitHub repository at https://github.com/epfLLM/meditron.

Meditron is a suite of open-source medical Large Language Models (LLMs). Meditron-7B is a 7-billion-parameter model adapted to the medical domain from Llama-2-7B through continued pretraining on a comprehensively curated medical corpus, including selected PubMed articles and abstracts, a new dataset of internationally recognized medical guidelines, and general-domain data from RedPajama-v1. Meditron-7B, finetuned on relevant training data, outperforms Llama-2-7B and PMC-Llama on multiple medical reasoning tasks.

Advisory Notice

While Meditron is designed to encode medical knowledge from sources of high-quality evidence, it is not yet adapted to deliver this knowledge appropriately, safely, or within professional actionable constraints. We recommend against deploying Meditron in medical applications without extensive use-case alignment and additional testing, specifically including randomized controlled trials in real-world practice settings.

Model Details

  • Developed by: EPFL LLM Team
  • Model type: Causal decoder-only transformer language model
  • Language(s): English (mainly)
  • Model license: LLAMA 2 COMMUNITY LICENSE AGREEMENT
  • Code license: APACHE 2.0 LICENSE
  • Continue-pretrained from model: Llama-2-7B
  • Context length: 2K tokens
  • Input: Text-only data
  • Output: Model generates text only
  • Status: This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we enhance the model's performance.
  • Knowledge cutoff: August 2023

Model Sources

  • Repository: epflLLM/meditron
  • Trainer: epflLLM/Megatron-LLM
  • Paper: MediTron-70B: Scaling Medical Pretraining for Large Language Models

Uses

Meditron-7B is being made available for further testing and assessment as an AI assistant to enhance clinical decision-making and to broaden access to an LLM for healthcare use. Potential use cases include, but are not limited to:

  • Medical exam question answering
  • Supporting differential diagnosis
  • Disease information (symptoms, cause, treatment) queries
  • General health information queries
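For local experimentation outside Replicate, a minimal Hugging Face Transformers sketch is shown below. The model id "epfl-llm/meditron-7b" is an assumption; confirm the canonical weights location in the epflLLM/meditron repository. Loading in float16 with device_map="auto" requires the accelerate package and a GPU with roughly 16 GB of memory.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hugging Face model id; check the epflLLM/meditron repository
# for the canonical weights location.
model_id = "epfl-llm/meditron-7b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",  # requires the accelerate package
)

prompt = "What are the common causes of iron-deficiency anemia?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Meditron-7B has a 2K-token context window (see Model Details above),
# so keep prompt plus generation within that budget.
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```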