ResearchTrend.AI


RuleRAG: Rule-Guided Retrieval-Augmented Generation with Language Models for Question Answering

15 October 2024
Zhongwu Chen
Chengjin Xu
Dingmin Wang
Zhen Huang
Yong Dou
Xuhui Jiang
Jian Guo
Abstract

Retrieval-augmented generation (RAG) has shown promising potential in knowledge-intensive question answering (QA). However, existing approaches consider only the query itself, neither specifying retrieval preferences for the retrievers nor informing the generators how to refer to the retrieved documents when producing answers, which significantly limits QA performance. To address these issues, we propose Rule-guided Retrieval-Augmented Generation with LMs, which explicitly introduces rules for in-context learning (RuleRAG-ICL) to guide retrievers to recall related documents in the directions of the rules and to uniformly guide generators to reason with attribution to the same rules. Moreover, most existing RAG datasets were constructed without considering rules, while Knowledge Graphs (KGs) are recognized as a source of high-quality rules. We therefore construct five rule-aware RAG benchmarks for QA, RuleQA, based on KGs to stress the significance of retrieval and reasoning with rules. Experiments on RuleQA demonstrate that RuleRAG-ICL improves retrieval quality by +89.2% in Recall@10 and answer accuracy by +103.1% in Exact Match, and that the fine-tuned variant, RuleRAG-FT, yields further gains. In addition, experiments on four existing RAG datasets show that RuleRAG remains effective when the rules from RuleQA are offered to them, further demonstrating the generalization of rule guidance in RuleRAG.
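The abstract's core idea, prepending the same rules to both the retrieval query and the generation prompt, can be sketched in a few lines. Everything below is an illustrative assumption, not the authors' implementation: the toy corpus, the KG-style rule, and the word-overlap scoring all stand in for a real retriever and generator.

```python
# Minimal sketch of rule-guided RAG: the same rules steer retrieval
# and are then prepended to the generator prompt for attribution.
# All names and data here are hypothetical stand-ins.

def rule_guided_retrieve(query, rules, corpus, k=2):
    """Score documents against the query expanded with rules, so the
    retriever recalls documents in the directions of the rules.
    (Word overlap stands in for a real dense/sparse retriever.)"""
    expanded_terms = set((query + " " + " ".join(rules)).lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(expanded_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_generation_prompt(query, rules, docs):
    """Prepend the same rules to the generator prompt so answers can be
    attributed to them, mirroring the uniform guidance idea."""
    rule_block = "\n".join(f"Rule: {r}" for r in rules)
    doc_block = "\n".join(f"Doc: {d}" for d in docs)
    return f"{rule_block}\n{doc_block}\nQuestion: {query}\nAnswer:"

# Toy usage with an invented KG-style rule.
rules = ["If X was born in Y, then X's nationality is usually Y's country."]
corpus = [
    "Marie Curie was born in Warsaw.",
    "The Eiffel Tower is in Paris.",
    "Warsaw is the capital of Poland.",
]
query = "What is Marie Curie's nationality?"
docs = rule_guided_retrieve(query, rules, corpus)
prompt = build_generation_prompt(query, rules, docs)
```

In this sketch the rule's vocabulary ("born in") pulls the birth-place document to the top of the ranking, and the generator then sees rule, evidence, and question together in one prompt.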

View on arXiv
@article{chen2025_2410.22353,
  title={RuleRAG: Rule-Guided Retrieval-Augmented Generation with Language Models for Question Answering},
  author={Zhongwu Chen and Chengjin Xu and Dingmin Wang and Zhen Huang and Yong Dou and Xuhui Jiang and Jian Guo},
  journal={arXiv preprint arXiv:2410.22353},
  year={2025}
}