GainRAG: Preference Alignment in Retrieval-Augmented Generation through Gain Signal Synthesis

The Retrieval-Augmented Generation (RAG) framework introduces a retrieval module to dynamically inject retrieved information into the input context of large language models (LLMs), and has demonstrated significant success in various NLP tasks. However, recent studies have pointed out a preference gap between retrievers and LLMs in the RAG framework, which limits further improvements in system performance. Some highly relevant passages may interfere with the LLM's reasoning because they contain complex or contradictory information, while some indirectly related or even inaccurate content may help the LLM generate more accurate answers by providing suggestive information or logical clues. To address this, we propose GainRAG, a novel approach that aligns the retriever's and LLM's preferences by defining a new metric, "gain", which measures how well an input passage contributes to correct outputs. Specifically, we propose a method to estimate these gain signals and train a middleware that aligns the preferences of the retriever and the LLM using only limited data. In addition, we introduce a pseudo-passage strategy to mitigate degradation. Experimental results on six datasets verify the effectiveness of GainRAG.
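As a rough illustration only (not the paper's exact formulation), one common way to realize a passage-level "gain" signal of this kind is to measure how much a retrieved passage boosts the LLM's likelihood of the gold answer relative to answering without retrieval. The sketch below follows that assumption; the model name, prompt format, and difference-based definition are all illustrative choices, not details taken from GainRAG.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical sketch: score a passage by the change in the LLM's
# log-likelihood of the gold answer when the passage is prepended to the query.
MODEL_NAME = "meta-llama/Llama-2-7b-hf"  # assumed reader model, not specified by the paper
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).eval()

def answer_log_likelihood(prompt: str, answer: str) -> float:
    """Sum of token log-probabilities of `answer` conditioned on `prompt`."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    answer_ids = tokenizer(answer, return_tensors="pt", add_special_tokens=False).input_ids
    input_ids = torch.cat([prompt_ids, answer_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits
    # logits at position i predict the token at position i + 1,
    # so answer tokens are scored by the logits one step earlier.
    log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
    answer_positions = range(prompt_ids.size(1) - 1, input_ids.size(1) - 1)
    total = 0.0
    for pos, tok in zip(answer_positions, answer_ids[0]):
        total += log_probs[0, pos, tok].item()
    return total

def estimate_gain(query: str, passage: str, gold_answer: str) -> float:
    """Gain of `passage`: likelihood boost it gives the gold answer vs. no retrieval."""
    with_passage = f"Context: {passage}\nQuestion: {query}\nAnswer:"
    without_passage = f"Question: {query}\nAnswer:"
    return (answer_log_likelihood(with_passage, gold_answer)
            - answer_log_likelihood(without_passage, gold_answer))
```

Under this assumed definition, a passage with a positive score helps the LLM reach the correct answer even if it is only indirectly relevant, while a topically relevant but distracting passage can receive a negative score; such signals could then supervise a middleware that re-ranks passages toward the LLM's preferences.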
@article{jiang2025_2505.18710,
  title={GainRAG: Preference Alignment in Retrieval-Augmented Generation through Gain Signal Synthesis},
  author={Yi Jiang and Sendong Zhao and Jianbo Li and Haochun Wang and Bing Qin},
  journal={arXiv preprint arXiv:2505.18710},
  year={2025}
}