
Divide-Then-Align: Honest Alignment based on the Knowledge Boundary of RAG

Main: 9 pages, 5 figures, 10 tables
Appendix: 7 pages
Bibliography: 4 pages
Abstract

Large language models (LLMs) augmented with retrieval systems have significantly advanced natural language processing tasks by integrating external knowledge sources, enabling more accurate and contextually rich responses. To improve the robustness of such systems against noisy retrievals, Retrieval-Augmented Fine-Tuning (RAFT) has emerged as a widely adopted method. However, RAFT conditions models to generate answers even in the absence of reliable knowledge. This behavior undermines their reliability in high-stakes domains, where acknowledging uncertainty is critical. To address this issue, we propose Divide-Then-Align (DTA), a post-training approach designed to endow RAG systems with the ability to respond with "I don't know" when the query is out of the knowledge boundary of both the retrieved passages and the model's internal knowledge. DTA divides data samples into four knowledge quadrants and constructs tailored preference data for each quadrant, resulting in a curated dataset for Direct Preference Optimization (DPO). Experimental results on three benchmark datasets demonstrate that DTA effectively balances accuracy with appropriate abstention, enhancing the reliability and trustworthiness of retrieval-augmented systems.
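As a rough illustration of the quadrant-based data construction described above, the sketch below splits QA samples by whether the retrieved passages and the model's own (closed-book) knowledge cover the query, and builds a preference pair per sample for DPO. All names, the substring-matching heuristic for "knowing" the answer, and the pairing rules are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of the "knowledge quadrant" split and per-quadrant
# preference pairs; field names and heuristics are assumptions, not the
# authors' code.
from dataclasses import dataclass
from enum import Enum


class Quadrant(Enum):
    BOTH_KNOW = "retrieval+parametric"   # passages and model both cover the query
    RETRIEVAL_ONLY = "retrieval_only"    # only the retrieved passages cover it
    PARAMETRIC_ONLY = "parametric_only"  # only the model's internal knowledge
    NEITHER_KNOWS = "neither"            # outside both knowledge boundaries


@dataclass
class Sample:
    query: str
    gold_answer: str
    retrieved_passages: list[str]
    closed_book_answer: str  # model's answer without retrieval
    rag_answer: str          # model's answer with retrieved passages


def assign_quadrant(s: Sample) -> Quadrant:
    """Place a sample in one of four quadrants using a simple substring
    heuristic for whether each knowledge source contains the gold answer."""
    retrieval_knows = any(s.gold_answer.lower() in p.lower()
                          for p in s.retrieved_passages)
    parametric_knows = s.gold_answer.lower() in s.closed_book_answer.lower()
    if retrieval_knows and parametric_knows:
        return Quadrant.BOTH_KNOW
    if retrieval_knows:
        return Quadrant.RETRIEVAL_ONLY
    if parametric_knows:
        return Quadrant.PARAMETRIC_ONLY
    return Quadrant.NEITHER_KNOWS


IDK = "I don't know."


def build_preference_pair(s: Sample) -> dict:
    """Construct a (chosen, rejected) pair for DPO: abstention is preferred
    only when neither knowledge source covers the query; otherwise a grounded
    answer is preferred over an unnecessary refusal."""
    if assign_quadrant(s) is Quadrant.NEITHER_KNOWS:
        return {"prompt": s.query, "chosen": IDK, "rejected": s.rag_answer}
    return {"prompt": s.query, "chosen": s.gold_answer, "rejected": IDK}
```

In this reading, the pairs from the four quadrants would then be pooled into a single preference dataset and used for standard DPO post-training of the RAG model.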

@article{sun2025_2505.20871,
  title={Divide-Then-Align: Honest Alignment based on the Knowledge Boundary of RAG},
  author={Xin Sun and Jianan Xie and Zhongqi Chen and Qiang Liu and Shu Wu and Yuehe Chen and Bowen Song and Weiqiang Wang and Zilei Wang and Liang Wang},
  journal={arXiv preprint arXiv:2505.20871},
  year={2025}
}