Beyond RAG: Reinforced Reasoning Augmented Generation for Clinical Notes

3 June 2025
Lo Pang-Yun Ting
Chengshuai Zhao
Yu-Hua Zeng
Yuan Jee Lim
Kun-Ta Chuang
Communities: RALM, LRM
Main: 9 pages · 3 figures · 5 tables · Bibliography: 3 pages · Appendix: 1 page
Abstract

Clinical note generation aims to automatically produce free-text summaries of a patient's condition and diagnostic process, with discharge instructions being a representative long-form example. While recent large language model (LLM)-based methods pre-trained on general clinical corpora show promise in clinical text generation, they fall short in producing long-form notes from limited patient information. In this paper, we propose R2AG, the first reinforced retriever for long-form discharge instruction generation based on pre-admission data. R2AG is trained with reinforcement learning to retrieve reasoning paths from a medical knowledge graph, providing explicit semantic guidance to the LLM. To bridge the information gap, we propose Group-Based Retriever Optimization (GRO) which improves retrieval quality with group-relative rewards, encouraging reasoning leaps for deeper inference by the LLM. Comprehensive experiments on the MIMIC-IV-Note dataset show that R2AG outperforms baselines in both clinical efficacy and natural language generation metrics. Further analysis reveals that R2AG fills semantic gaps in sparse input scenarios, and retrieved reasoning paths help LLMs avoid clinical misinterpretation by focusing on key evidence and following coherent reasoning.
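The abstract does not spell out GRO's exact objective, so the sketch below is only a rough illustration of the general idea of a group-relative reward: several reasoning paths are sampled for the same patient input, and each path's reward is normalized against the statistics of its own group, so the retriever is reinforced for paths that beat its alternatives rather than for absolute reward scale. The function name, reward values, and normalization scheme here are assumptions for illustration, not taken from the paper.

import numpy as np

def group_relative_advantages(rewards):
    # Hypothetical helper: center and scale each sampled path's reward by
    # the group mean and standard deviation, removing per-input reward
    # scale. The epsilon guards against a degenerate group where all
    # sampled paths received identical rewards.
    rewards = np.asarray(rewards, dtype=float)
    return (rewards - rewards.mean()) / (rewards.std() + 1e-8)

# Hypothetical rewards for four reasoning paths sampled for one
# pre-admission record (e.g., downstream generation-quality scores).
print(group_relative_advantages([0.62, 0.48, 0.71, 0.55]))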

View on arXiv: https://arxiv.org/abs/2506.05386
@article{ting2025_2506.05386,
  title={Beyond RAG: Reinforced Reasoning Augmented Generation for Clinical Notes},
  author={Lo Pang-Yun Ting and Chengshuai Zhao and Yu-Hua Zeng and Yuan Jee Lim and Kun-Ta Chuang},
  journal={arXiv preprint arXiv:2506.05386},
  year={2025}
}