arXiv: 2401.08396

Hidden Flaws Behind Expert-Level Accuracy of GPT-4 Vision in Medicine

16 January 2024
Qiao Jin
Fangyuan Chen
Yiliang Zhou
Ziyang Xu
Justin M. Cheung
Robert Chen
Ronald M. Summers
Justin F. Rousseau
Peiyun Ni
Marc J. Landsman
Sally L. Baxter
S. Al’Aref
Yijia Li
Alex Chen
Josef A. Brejt
Michael F. Chiang
Yifan Peng
Zhiyong Lu
Communities: ELM, MedIm, LM&MA
Abstract

Recent studies indicate that Generative Pre-trained Transformer 4 with Vision (GPT-4V) outperforms human physicians in medical challenge tasks. However, these evaluations have primarily focused on multiple-choice accuracy alone. Our study extends the current scope with a comprehensive analysis of GPT-4V's rationales, covering image comprehension, recall of medical knowledge, and step-by-step multimodal reasoning, when solving New England Journal of Medicine (NEJM) Image Challenges, an imaging quiz designed to test the knowledge and diagnostic capabilities of medical professionals. Evaluation results confirmed that GPT-4V performs comparably to human physicians on multiple-choice accuracy (81.6% vs. 77.8%). GPT-4V also performs well in cases that physicians answer incorrectly, with over 78% accuracy. However, we discovered that GPT-4V frequently presents flawed rationales even in cases where it makes the correct final choice (35.5%), most prominently in image comprehension (27.2%). Despite GPT-4V's high accuracy on multiple-choice questions, our findings emphasize the necessity of further in-depth evaluations of its rationales before integrating such multimodal AI models into clinical workflows.
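For readers who want to probe this behavior themselves, below is a minimal sketch of how a single multiple-choice image challenge could be posed to a vision-language model so that it must produce a rationale (image description, knowledge recall, step-by-step reasoning) alongside its final answer. It assumes the OpenAI Python SDK (v1+); the model name, prompt wording, and the `ask_image_challenge` helper are illustrative, not the paper's exact protocol.

```python
# Sketch: pose one NEJM-style image challenge and request an explicit rationale.
# Assumes the OpenAI Python SDK (v1+); prompt and model choice are illustrative.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_image_challenge(image_path: str, question: str, choices: list[str]) -> str:
    """Pose one multiple-choice image challenge and return the model's full reply."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()

    # Label the answer options A, B, C, ...
    options = "\n".join(f"{chr(65 + i)}. {c}" for i, c in enumerate(choices))
    prompt = (
        f"{question}\n{options}\n"
        "First describe the image, then recall the relevant medical knowledge, "
        "then reason step by step, and only then state the single best answer letter."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative stand-in for GPT-4V (gpt-4-vision-preview)
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content
```

Scoring the returned rationale for image-comprehension, knowledge-recall, and reasoning errors separately from the final answer letter is what surfaces the kind of hidden flaws the paper reports.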
