
One Patient, Many Contexts: Scaling Medical AI Through Contextual Intelligence

Main: 22 pages, 2 figures, 1 table
Abstract

Medical foundation models, including language models trained on clinical notes, vision-language models trained on medical images, and multimodal models trained on electronic health records, can summarize clinical notes, answer medical questions, and assist in decision-making. Adapting these models to new populations, specialties, or settings typically requires fine-tuning, careful prompting, or retrieval from knowledge bases. Such adaptation can be impractical, and it limits the models' ability to interpret unfamiliar inputs and adjust to clinical situations not represented during training. As a result, models are prone to contextual errors, in which predictions appear reasonable but fail to account for critical patient-specific or contextual information. These errors stem from a fundamental limitation of current models: they struggle to dynamically adjust their behavior across the evolving contexts of medical care. In this Perspective, we outline a vision for context-switching in medical AI: models that dynamically adapt their reasoning, without retraining, to new specialties, populations, workflows, and clinical roles. We envision context-switching AI that can diagnose, manage, and treat a wide range of diseases across specialties and regions, and expand access to medical care.

@article{li2025_2506.10157,
  title={One Patient, Many Contexts: Scaling Medical AI Through Contextual Intelligence},
  author={Michelle M. Li and Ben Y. Reis and Adam Rodman and Tianxi Cai and Noa Dagan and Ran D. Balicer and Joseph Loscalzo and Isaac S. Kohane and Marinka Zitnik},
  journal={arXiv preprint arXiv:2506.10157},
  year={2025}
}