Retrieval-Augmented Generation for Natural Language Processing: A Survey

Large language models (LLMs) have demonstrated great success in various fields, benefiting from the vast number of parameters in which they store knowledge. However, LLMs still suffer from several key issues, such as hallucination, outdated knowledge, and a lack of domain-specific expertise. The emergence of retrieval-augmented generation (RAG), which leverages an external knowledge database to augment LLMs, compensates for these drawbacks. This paper reviews the significant techniques of RAG, with a particular focus on the retriever and retrieval fusion. In addition, tutorial code is provided for implementing representative RAG techniques. This paper further discusses RAG updating, covering RAG both with and without knowledge updates. We then introduce RAG evaluation and benchmarking, as well as applications of RAG in representative NLP tasks and industrial scenarios. Finally, we discuss future directions and challenges of RAG to promote the development of this field.
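To make the pipeline the abstract describes concrete (retrieve passages from an external knowledge database, fuse them into the prompt, then generate), the following is a minimal sketch, not the survey's tutorial code. The `embed` function is a hypothetical stand-in for a trained dense encoder, and the toy corpus is illustrative; only the retrieval and query-side fusion steps are shown, with generation left to whatever LLM the prompt is sent to.

```python
import numpy as np

# Toy corpus standing in for the external knowledge database.
CORPUS = [
    "RAG augments a language model with passages fetched from an external datastore.",
    "Hallucination means the model asserts facts unsupported by evidence.",
    "Dense retrievers encode queries and documents into a shared vector space.",
]

def embed(text: str) -> np.ndarray:
    """Hypothetical encoder: hashed bag-of-words vector.
    A real system would use a trained dense encoder instead."""
    vec = np.zeros(256)
    for token in text.lower().split():
        vec[hash(token) % 256] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the top-k corpus passages by cosine similarity to the query."""
    q = embed(query)
    doc_vecs = np.stack([embed(d) for d in CORPUS])
    scores = doc_vecs @ q  # vectors are unit-normalized, so dot product = cosine
    top = np.argsort(scores)[::-1][:k]
    return [CORPUS[i] for i in top]

def rag_prompt(query: str) -> str:
    """Query-side fusion: prepend retrieved passages to the user query."""
    context = "\n".join(f"- {p}" for p in retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

if __name__ == "__main__":
    # The resulting prompt would be passed to an LLM for generation.
    print(rag_prompt("How does RAG reduce hallucination?"))
```

Because the retrieved context is injected at inference time, the knowledge database can be updated without retraining the LLM, which is the property that addresses the knowledge-update issue the abstract mentions.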
@article{wu2025_2407.13193,
  title   = {Retrieval-Augmented Generation for Natural Language Processing: A Survey},
  author  = {Shangyu Wu and Ying Xiong and Yufei Cui and Haolun Wu and Can Chen and Ye Yuan and Lianming Huang and Xue Liu and Tei-Wei Kuo and Nan Guan and Chun Jason Xue},
  journal = {arXiv preprint arXiv:2407.13193},
  year    = {2025}
}