# MedGemma 4B IT (Non-Clinical Use Only)
## Description
This model is based on MedGemma (Health AI Developer Foundations) released by Google.
It is provided strictly for research, educational, demonstration, and non-clinical purposes.
## ⚠️ IMPORTANT USE RESTRICTIONS

- This model is NOT intended for clinical use.
- This model must NOT be used for:
  - medical diagnosis
  - treatment planning
  - clinical decision-making
  - patient care
  - any other form of healthcare service
- Outputs generated by this model must not be relied upon as medical advice.
Users are solely responsible for ensuring that their use of this model complies with all applicable laws, regulations, and professional standards in their jurisdiction.
## Intended Use

### ✅ Allowed Use Cases
- AI research and experimentation
- Education and training (AI, NLP, LLM evaluation)
- Non-clinical medical text understanding demonstrations
- Benchmarking and performance evaluation
- Prompt engineering experiments
### ❌ Prohibited Use Cases
- Clinical diagnosis or treatment
- Medical device functionality
- Supporting real-world patient care
- Regulatory, billing, or medical decision support systems
## Legal & Compliance Notice
This model is part of the Health AI Developer Foundations (HAI-DEF) and is provided under and subject to the Health AI Developer Foundations Terms of Use found at:
https://developers.google.com/health-ai-developer-foundations/terms
Use of this model is also subject to the Health AI Developer Foundations Prohibited Use Policy.
By using this model, you acknowledge and agree to these terms.
## Model Overview (Informational Only)
MedGemma is a collection of Gemma 3 variants trained for performance on medical text and image comprehension. Developers may use MedGemma to accelerate healthcare-adjacent AI research and experimentation, subject to non-clinical restrictions.
### Available Variants
- MedGemma 4B: Multimodal, available in pre-trained (-pt) and instruction-tuned (-it) versions
- MedGemma 27B: Available in text-only and multimodal instruction-tuned variants
The instruction-tuned (-it) version is generally recommended as a starting point for most non-clinical applications.
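As a sketch of non-clinical experimentation with the instruction-tuned variant, the example below builds a chat-format prompt of the kind instruction-tuned Gemma-family models expect. It assumes the Hugging Face `transformers` library and the `google/medgemma-4b-it` checkpoint name; verify both against the official model card before use. The model-loading lines are shown as comments only, since they require accepting the HAI-DEF terms and downloading large weights:

```python
# Hedged sketch: build a chat-format message list for a non-clinical,
# educational query. The system prompt reinforces the use restrictions above.

def build_messages(question: str) -> list:
    """Return a chat-format message list for an instruction-tuned model."""
    return [
        {"role": "system",
         "content": [{"type": "text",
                      "text": "You are a research assistant for non-clinical, "
                              "educational use only. Do not provide medical advice."}]},
        {"role": "user",
         "content": [{"type": "text", "text": question}]},
    ]

messages = build_messages(
    "For educational purposes, summarize what a complete blood count measures."
)

# Model loading and generation (not run here; requires accepting the HAI-DEF
# terms and downloading the weights — names are assumptions, verify first):
# from transformers import pipeline
# pipe = pipeline("image-text-to-text", model="google/medgemma-4b-it")
# out = pipe(text=messages, max_new_tokens=128)
```

The system message is one place to encode the non-clinical restriction in practice, though it does not by itself make any downstream use compliant.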
### Training Summary
- Multimodal variants utilize a SigLIP-based image encoder pre-trained on de-identified medical images
- LLM components are trained on diverse medical text corpora and question–answer datasets
- All training data is de-identified and used to illustrate baseline capabilities only
### Important Clarification
Although MedGemma variants have been evaluated on clinically relevant benchmarks, this hosted model is NOT validated, approved, or authorized for clinical or diagnostic use.
For image-only medical research tasks without text generation, the MedSigLIP image encoder may be more appropriate.
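An image-encoder workflow of this kind typically produces one embedding vector per image, which are then compared by cosine similarity (e.g. for retrieval or clustering in research settings). Below is a minimal sketch of the comparison step using NumPy and dummy vectors; the MedSigLIP encoding call itself is shown only as a hedged comment, since the exact checkpoint and class names are assumptions to be checked against the MedSigLIP model card:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# In a real pipeline the vectors would come from the image encoder, e.g.
# (names are assumptions — verify against the official documentation):
# from transformers import AutoModel, AutoProcessor
# model = AutoModel.from_pretrained("google/medsiglip-448")
# processor = AutoProcessor.from_pretrained("google/medsiglip-448")

# Demonstration with dummy embeddings standing in for encoder outputs:
rng = np.random.default_rng(0)
emb_a = rng.normal(size=384)
emb_b = emb_a.copy()
sim = cosine_similarity(emb_a, emb_b)  # close to 1.0 for identical vectors
```

As with the text model, any such use remains research-only and subject to the restrictions stated above.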
Please consult the MedGemma Technical Report for detailed technical and evaluation information.
## Disclaimer
The model and its outputs are provided “AS IS”, without warranties of any kind.
This model does NOT provide medical advice, and no doctor–patient relationship is created through its use.