BRIDGE: Benchmarking Large Language Models for Understanding Real-world Clinical Practice Text

Large language models (LLMs) hold great promise for medical applications and are evolving rapidly, with new models being released at an accelerated pace. However, current evaluations of LLMs in clinical contexts remain limited. Most existing benchmarks rely on medical exam-style questions or PubMed-derived text, failing to capture the complexity of real-world electronic health record (EHR) data. Others focus narrowly on specific application scenarios, limiting their generalizability across broader clinical use. To address this gap, we present BRIDGE, a comprehensive multilingual benchmark comprising 87 tasks drawn from real-world clinical data sources across nine languages. We systematically evaluated 52 state-of-the-art LLMs (including DeepSeek-R1, GPT-4o, Gemini, and Llama 4) under various inference strategies. Across a total of 13,572 experiments, our results reveal substantial performance variation across model sizes, languages, natural language processing tasks, and clinical specialties. Notably, we demonstrate that open-source LLMs can achieve performance comparable to proprietary models, while medically fine-tuned LLMs based on older architectures often underperform compared with newer general-purpose models. BRIDGE and its corresponding leaderboard serve as a foundational resource and a unique reference for the development and evaluation of new LLMs in real-world clinical text understanding.
@article{wu2025_2504.19467,
  title={BRIDGE: Benchmarking Large Language Models for Understanding Real-world Clinical Practice Text},
  author={Jiageng Wu and Bowen Gu and Ren Zhou and Kevin Xie and Doug Snyder and Yixing Jiang and Valentina Carducci and Richard Wyss and Rishi J Desai and Emily Alsentzer and Leo Anthony Celi and Adam Rodman and Sebastian Schneeweiss and Jonathan H. Chen and Santiago Romero-Brufau and Kueiyu Joshua Lin and Jie Yang},
  journal={arXiv preprint arXiv:2504.19467},
  year={2025}
}