
MedChat: A Multi-Agent Framework for Multimodal Diagnosis with Large Language Models

9 June 2025
Philip R. Liu
Sparsh Bansal
Jimmy Dinh
Aditya Pawar
Ramani Satishkumar
Shail Desai
Neeraj Gupta
Xin Wang
Shu Hu
    LM&MA
ArXiv (abs) · PDF · HTML
Main: 6 Pages
6 Figures
Bibliography: 1 Page
Abstract

The integration of deep learning-based glaucoma detection with large language models (LLMs) presents an automated strategy to mitigate ophthalmologist shortages and improve clinical reporting efficiency. However, applying general LLMs to medical imaging remains challenging due to hallucinations, limited interpretability, and insufficient domain-specific medical knowledge, which can potentially reduce clinical accuracy. Although recent approaches combining imaging models with LLM reasoning have improved reporting, they typically rely on a single generalist agent, restricting their capacity to emulate the diverse and complex reasoning found in multidisciplinary medical teams. To address these limitations, we propose MedChat, a multi-agent diagnostic framework and platform that combines specialized vision models with multiple role-specific LLM agents, all coordinated by a director agent. This design enhances reliability, reduces hallucination risk, and enables interactive diagnostic reporting through an interface tailored for clinical review and educational use. Code available at this https URL.
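To make the coordination pattern described in the abstract more concrete, the sketch below shows one plausible way a director agent could route a vision model's finding to role-specific LLM agents and assemble their answers into a report. This is an illustration only, not the MedChat implementation: the class names, the `Finding` structure, and the stubbed LLM call are all assumptions introduced here for readability.

```python
# Minimal sketch of a director-coordinated multi-agent diagnostic loop.
# All names (DirectorAgent, RoleAgent, Finding, stub_llm) are hypothetical;
# the real MedChat code may be structured very differently.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Finding:
    """Output of a specialized vision model (e.g., a glaucoma classifier)."""
    label: str
    confidence: float


def stub_llm(prompt: str) -> str:
    """Placeholder for a call to an actual LLM backend."""
    return f"[response to: {prompt[:60]}...]"


class RoleAgent:
    """An LLM agent constrained to a single clinical role via its prompt."""

    def __init__(self, role: str, llm: Callable[[str], str] = stub_llm):
        self.role = role
        self.llm = llm

    def review(self, finding: Finding, question: str) -> str:
        prompt = (
            f"You are acting as a {self.role}. "
            f"Vision-model finding: {finding.label} "
            f"(confidence {finding.confidence:.2f}). "
            f"Question: {question}"
        )
        return self.llm(prompt)


class DirectorAgent:
    """Routes a finding to role-specific agents and merges their answers."""

    def __init__(self, agents: Dict[str, RoleAgent]):
        self.agents = agents

    def diagnose(self, finding: Finding, question: str) -> str:
        sections: List[str] = []
        for role, agent in self.agents.items():
            sections.append(f"## {role}\n{agent.review(finding, question)}")
        return "\n\n".join(sections)


if __name__ == "__main__":
    director = DirectorAgent({
        "ophthalmologist": RoleAgent("ophthalmologist"),
        "report reviewer": RoleAgent("report reviewer"),
    })
    finding = Finding(label="suspected glaucoma", confidence=0.87)
    print(director.diagnose(finding, "Summarize the finding for a clinical report."))
```

The key design point the abstract emphasizes is that each role-specific agent sees only its own constrained prompt, while the director decides which roles to consult and how to combine their outputs, rather than relying on a single generalist agent.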

View on arXiv
@article{liu2025_2506.07400,
  title={MedChat: A Multi-Agent Framework for Multimodal Diagnosis with Large Language Models},
  author={Philip R. Liu and Sparsh Bansal and Jimmy Dinh and Aditya Pawar and Ramani Satishkumar and Shail Desai and Neeraj Gupta and Xin Wang and Shu Hu},
  journal={arXiv preprint arXiv:2506.07400},
  year={2025}
}