Context-Guided Dynamic Retrieval for Improving Generation Quality in RAG Models

Abstract

This paper studies dynamic optimization of the Retrieval-Augmented Generation (RAG) architecture, proposing a state-aware dynamic knowledge retrieval mechanism that enhances semantic understanding and knowledge-scheduling efficiency of large language models in open-domain question answering and complex generation tasks. The method introduces a multi-level perceptive retrieval vector construction strategy and a differentiable document matching path, which together enable end-to-end joint training and collaborative optimization of the retrieval and generation modules, addressing the limitations of static RAG structures in context adaptation and knowledge access. Experiments on the Natural Questions dataset evaluate the proposed structure across different large models, including GPT-4, GPT-4o, and DeepSeek. Comparative and ablation experiments from multiple perspectives confirm significant improvements in BLEU and ROUGE-L scores, and the approach demonstrates stronger robustness and generation consistency in tasks involving semantic ambiguity and multi-document fusion. These results highlight its broad application potential and practical value in building high-quality language generation systems.
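To illustrate the general idea behind a "differentiable document matching path", the sketch below replaces hard top-k document selection with a softmax over similarity scores, so gradients from the generator's loss can flow back into the retriever. This is a minimal illustration of the concept, not the authors' implementation; the function names, the dot-product scorer, and the temperature parameter are all assumptions for the example.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax."""
    e = np.exp(x - x.max())
    return e / e.sum()

def differentiable_retrieve(query_vec, doc_vecs, temperature=0.5):
    """Soft document matching (illustrative, not the paper's exact method).

    Similarity scores are relaxed into softmax weights, so document
    "selection" is a smooth function of the query representation and
    gradients can propagate end-to-end through retrieval.
    """
    scores = doc_vecs @ query_vec            # dot-product similarity per document
    weights = softmax(scores / temperature)  # differentiable selection weights
    fused = weights @ doc_vecs               # weighted mixture passed to the generator
    return weights, fused

# Toy usage with random embeddings (8-dim query, 4 candidate documents)
rng = np.random.default_rng(0)
q = rng.normal(size=8)
docs = rng.normal(size=(4, 8))
w, fused = differentiable_retrieve(q, docs)
print(w.round(3), fused.shape)
```

Lowering the temperature sharpens the weights toward hard retrieval, while higher values keep the matching smoother and easier to train; a hard top-k step would break this gradient path.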

@article{he2025_2504.19436,
  title={Context-Guided Dynamic Retrieval for Improving Generation Quality in RAG Models},
  author={Jacky He and Guiran Liu and Binrong Zhu and Hanlu Zhang and Hongye Zheng and Xiaokai Wang},
  journal={arXiv preprint arXiv:2504.19436},
  year={2025}
}