RAG+: Enhancing Retrieval-Augmented Generation with Application-Aware Reasoning

13 June 2025
Yu Wang
Shiwan Zhao
Ming Fan
Zhihu Wang
Yubo Zhang
Xicheng Zhang
Zhengfan Wang
Heyuan Huang
Ting Liu
Main: 8 pages · Appendix: 14 pages · Bibliography: 2 pages · 25 figures · 14 tables
Abstract

The integration of external knowledge through Retrieval-Augmented Generation (RAG) has become foundational in enhancing large language models (LLMs) for knowledge-intensive tasks. However, existing RAG paradigms often overlook the cognitive step of applying knowledge, leaving a gap between retrieved facts and task-specific reasoning. In this work, we introduce RAG+, a principled and modular extension that explicitly incorporates application-aware reasoning into the RAG pipeline. RAG+ constructs a dual corpus consisting of knowledge and aligned application examples, created either manually or automatically, and retrieves both jointly during inference. This design enables LLMs not only to access relevant information but also to apply it within structured, goal-oriented reasoning processes. Experiments across mathematical, legal, and medical domains, conducted on multiple models, demonstrate that RAG+ consistently outperforms standard RAG variants, achieving average improvements of 3-5%, and peak gains up to 7.5% in complex scenarios. By bridging retrieval with actionable application, RAG+ advances a more cognitively grounded framework for knowledge integration, representing a step toward more interpretable and capable LLMs.
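
The abstract describes a mechanism that maps onto a small retrieval loop: a knowledge corpus aligned one-to-one with application examples, joint retrieval of both at inference time, and a prompt that asks the model to apply the retrieved facts. The Python sketch below illustrates that dual-corpus idea only; the bag-of-words similarity and the names DualCorpusRetriever and build_prompt are illustrative assumptions, not the authors' implementation.

from collections import Counter
from math import sqrt

def bow_embed(text):
    # Toy bag-of-words vector; a real system would use a dense retriever.
    return Counter(text.lower().split())

def cosine(a, b):
    num = sum(a[t] * b.get(t, 0) for t in a)
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

class DualCorpusRetriever:
    # Hypothetical helper: retrieves knowledge items together with their
    # aligned application examples, as the abstract's dual-corpus design suggests.
    def __init__(self, knowledge, applications):
        assert len(knowledge) == len(applications), "corpora must be aligned 1:1"
        self.knowledge = knowledge
        self.applications = applications
        self.vectors = [bow_embed(k) for k in knowledge]

    def retrieve(self, query, k=2):
        q = bow_embed(query)
        ranked = sorted(range(len(self.knowledge)),
                        key=lambda i: cosine(q, self.vectors[i]),
                        reverse=True)
        # Return each retrieved fact jointly with its aligned application example.
        return [(self.knowledge[i], self.applications[i]) for i in ranked[:k]]

def build_prompt(question, pairs):
    blocks = [f"Knowledge: {fact}\nHow it is applied: {example}" for fact, example in pairs]
    return ("\n\n".join(blocks)
            + f"\n\nQuestion: {question}\n"
              "Apply the knowledge above step by step before giving the final answer.")

# Example usage with a toy math-domain corpus (content invented for illustration).
knowledge = [
    "The quadratic formula solves ax^2 + bx + c = 0 via x = (-b +- sqrt(b^2 - 4ac)) / (2a).",
    "The Pythagorean theorem states a^2 + b^2 = c^2 for right triangles.",
]
applications = [
    "For x^2 - 5x + 6 = 0: a=1, b=-5, c=6, so x = (5 +- 1) / 2, giving x = 3 or x = 2.",
    "For legs 3 and 4: c = sqrt(9 + 16) = 5.",
]
retriever = DualCorpusRetriever(knowledge, applications)
pairs = retriever.retrieve("Solve x^2 - 7x + 12 = 0", k=1)
print(build_prompt("Solve x^2 - 7x + 12 = 0", pairs))

The key design point carried over from the abstract is that retrieval returns (knowledge, application) pairs rather than facts alone, so the downstream prompt can guide the model through applying the fact instead of merely quoting it.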

@article{wang2025_2506.11555,
  title={ RAG+: Enhancing Retrieval-Augmented Generation with Application-Aware Reasoning },
  author={ Yu Wang and Shiwan Zhao and Zhihu Wang and Yubo Zhang and Xicheng Zhang and Zhengfan Wang and Heyuan Huang and Ming Fan and Ting Liu },
  journal={arXiv preprint arXiv:2506.11555},
  year={ 2025 }
}