RARE: Retrieval-Augmented Reasoning Modeling

30 March 2025
Zhengren Wang
Jiayang Yu
Dongsheng Ma
Zhe Chen
Yu Wang
Zhiyu Li
Feiyu Xiong
Yanfeng Wang
Weinan E
Linpeng Tang
Wentao Zhang
Topics: RALM, LRM
Abstract

Domain-specific intelligence demands specialized knowledge and sophisticated reasoning for problem-solving, posing significant challenges for large language models (LLMs) that struggle with knowledge hallucination and inadequate reasoning capabilities under constrained parameter budgets. Inspired by Bloom's Taxonomy in educational theory, we propose Retrieval-Augmented Reasoning Modeling (RARE), a novel paradigm that decouples knowledge storage from reasoning optimization. RARE externalizes domain knowledge to retrievable sources and internalizes domain-specific reasoning patterns during training. Specifically, by injecting retrieved knowledge into training prompts, RARE transforms learning objectives from rote memorization to contextualized reasoning application. It enables models to bypass parameter-intensive memorization and prioritize the development of higher-order cognitive processes. Our experiments demonstrate that lightweight RARE-trained models (e.g., Llama-3.1-8B) could achieve state-of-the-art performance, surpassing retrieval-augmented GPT-4 and DeepSeek-R1 distilled counterparts. RARE establishes a paradigm shift where maintainable external knowledge bases synergize with compact, reasoning-optimized models, collectively driving more scalable domain-specific intelligence. Repo: this https URL

@article{wang2025_2503.23513,
  title={RARE: Retrieval-Augmented Reasoning Modeling},
  author={Zhengren Wang and Jiayang Yu and Dongsheng Ma and Zhe Chen and Yu Wang and Zhiyu Li and Feiyu Xiong and Yanfeng Wang and Weinan E and Linpeng Tang and Wentao Zhang},
  journal={arXiv preprint arXiv:2503.23513},
  year={2025}
}