MedHELM: Holistic Evaluation of Large Language Models for Medical Tasks

26 May 2025
Suhana Bedi, Hejie Cui, Miguel Fuentes, Alyssa Unell, Michael Wornow, Juan M. Banda, Nikesh Kotecha, Timothy Keyes, Yifan Mai, Mert Oez, Hao Qiu, Shrey Jain, Leonardo Schettini, Mehr Kashyap, Jason Alan Fries, Akshay Swaminathan, Philip Chung, Fateme Nateghi, Asad Aali, Ashwin Nayak, Shivam Vedak, Sneha S. Jain, Birju Patel, Oluseyi Fayanju, Shreya Shah, Ethan Goh, Dong-han Yao, Brian Soetikno, Eduardo Reis, Sergios Gatidis, Vasu Divi, Robson Capasso, Rachna Saralkar, Chia-Chun Chiang, Jenelle Jindal, Tho Pham, Faraz Ghoddusi, Steven Lin, Albert S. Chiou, Christy Hong, Mohana Roy, Michael F. Gensheimer, Hinesh Patel, Kevin Schulman, Dev Dash, Danton Char, Lance Downing, Francois Grolleau, Kameron Black, Bethel Mieso, Aydin Zahedivash, Wen-wai Yim, Harshita Sharma, Tony Lee, Hannah Kirsch, Jennifer Lee, Nerissa Ambers, Carlene Lugtu, Aditya Sharma, Bilal Mawji, Alex Alekseyev, Vicky Zhou, Vikas Kakkar, Jarrod Helzer, Anurang Revri, Yair Bannett, Roxana Daneshjou, Jonathan Chen, Emily Alsentzer, Keith Morse, Nirmal Ravi, Nima Aghaeepour, Vanessa Kennedy, Akshay Chaudhari, Thomas Wang, Sanmi Koyejo, Matthew P. Lungren, Eric Horvitz, Percy Liang, Mike Pfeffer, Nigam H. Shah
Communities: ELM, LM&MA, AI4MH
Main: 26 pages · Appendix: 3 pages · Bibliography: 2 pages · 6 figures · 5 tables
Abstract

While large language models (LLMs) achieve near-perfect scores on medical licensing exams, these evaluations inadequately reflect the complexity and diversity of real-world clinical practice. We introduce MedHELM, an extensible evaluation framework for assessing LLM performance on medical tasks, with three key contributions. First, a clinician-validated taxonomy spanning 5 categories, 22 subcategories, and 121 tasks, developed with 29 clinicians. Second, a comprehensive benchmark suite comprising 35 benchmarks (17 existing, 18 newly formulated) providing complete coverage of all categories and subcategories in the taxonomy. Third, a systematic comparison of LLMs with improved evaluation methods (using an LLM-jury) and a cost-performance analysis. Evaluation of 9 frontier LLMs on the 35 benchmarks revealed significant performance variation. Advanced reasoning models (DeepSeek R1: 66% win-rate; o3-mini: 64% win-rate) demonstrated superior performance, though Claude 3.5 Sonnet achieved comparable results at 40% lower estimated computational cost. On a normalized accuracy scale (0-1), most models performed strongly in Clinical Note Generation (0.73-0.85) and Patient Communication & Education (0.78-0.83), moderately in Medical Research Assistance (0.65-0.75), and generally lower in Clinical Decision Support (0.56-0.72) and Administration & Workflow (0.53-0.63). Our LLM-jury evaluation method achieved good agreement with clinician ratings (ICC = 0.47), surpassing both average clinician-clinician agreement (ICC = 0.43) and automated baselines including ROUGE-L (0.36) and BERTScore-F1 (0.44). These findings highlight the importance of real-world, task-specific evaluation for medical use of LLMs and provide an open-source framework to enable this.
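The abstract reports model comparisons as win-rates over the 35 benchmarks. As a rough illustration only, the Python sketch below computes a HELM-style mean win-rate from per-benchmark scores; the function name, benchmark labels, and scores are hypothetical and not taken from the MedHELM release, whose exact aggregation may differ.

    # Illustrative sketch (not the official MedHELM code): a HELM-style
    # mean win-rate from per-benchmark normalized scores.
    from itertools import combinations

    def mean_win_rate(scores: dict[str, dict[str, float]]) -> dict[str, float]:
        """scores[model][benchmark] -> normalized accuracy in [0, 1].

        For each benchmark, every pair of models is compared head-to-head;
        a win counts 1, a tie 0.5. Each model's win-rate is its total wins
        divided by the number of comparisons it participated in.
        """
        models = list(scores)
        wins = {m: 0.0 for m in models}
        comparisons = {m: 0 for m in models}
        benchmarks = {b for per_model in scores.values() for b in per_model}
        for bench in benchmarks:
            for a, b in combinations(models, 2):
                if bench not in scores[a] or bench not in scores[b]:
                    continue  # skip benchmarks a model was not evaluated on
                sa, sb = scores[a][bench], scores[b][bench]
                wins[a] += 1.0 if sa > sb else 0.5 if sa == sb else 0.0
                wins[b] += 1.0 if sb > sa else 0.5 if sa == sb else 0.0
                comparisons[a] += 1
                comparisons[b] += 1
        return {m: wins[m] / comparisons[m] for m in models if comparisons[m]}

    if __name__ == "__main__":
        # Hypothetical scores on three invented benchmark labels.
        example = {
            "deepseek-r1": {"note_gen": 0.85, "decision_support": 0.72, "admin": 0.63},
            "o3-mini": {"note_gen": 0.84, "decision_support": 0.70, "admin": 0.61},
            "claude-3.5-sonnet": {"note_gen": 0.83, "decision_support": 0.68, "admin": 0.60},
        }
        print(mean_win_rate(example))

On inputs like the made-up example above, each model's score is the fraction of head-to-head comparisons it wins across benchmarks, with ties counted as half a win.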

@article{bedi2025_2505.23802,
  title={MedHELM: Holistic Evaluation of Large Language Models for Medical Tasks},
  author={Suhana Bedi and Hejie Cui and Miguel Fuentes and Alyssa Unell and Michael Wornow and Juan M. Banda and Nikesh Kotecha and Timothy Keyes and Yifan Mai and Mert Oez and Hao Qiu and Shrey Jain and Leonardo Schettini and Mehr Kashyap and Jason Alan Fries and Akshay Swaminathan and Philip Chung and Fateme Nateghi and Asad Aali and Ashwin Nayak and Shivam Vedak and Sneha S. Jain and Birju Patel and Oluseyi Fayanju and Shreya Shah and Ethan Goh and Dong-han Yao and Brian Soetikno and Eduardo Reis and Sergios Gatidis and Vasu Divi and Robson Capasso and Rachna Saralkar and Chia-Chun Chiang and Jenelle Jindal and Tho Pham and Faraz Ghoddusi and Steven Lin and Albert S. Chiou and Christy Hong and Mohana Roy and Michael F. Gensheimer and Hinesh Patel and Kevin Schulman and Dev Dash and Danton Char and Lance Downing and Francois Grolleau and Kameron Black and Bethel Mieso and Aydin Zahedivash and Wen-wai Yim and Harshita Sharma and Tony Lee and Hannah Kirsch and Jennifer Lee and Nerissa Ambers and Carlene Lugtu and Aditya Sharma and Bilal Mawji and Alex Alekseyev and Vicky Zhou and Vikas Kakkar and Jarrod Helzer and Anurang Revri and Yair Bannett and Roxana Daneshjou and Jonathan Chen and Emily Alsentzer and Keith Morse and Nirmal Ravi and Nima Aghaeepour and Vanessa Kennedy and Akshay Chaudhari and Thomas Wang and Sanmi Koyejo and Matthew P. Lungren and Eric Horvitz and Percy Liang and Mike Pfeffer and Nigam H. Shah},
  journal={arXiv preprint arXiv:2505.23802},
  year={2025}
}