One Shot Dominance: Knowledge Poisoning Attack on Retrieval-Augmented Generation Systems

Large Language Models (LLMs) enhanced with Retrieval-Augmented Generation (RAG) have shown improved performance in generating accurate responses. However, the dependence on external knowledge bases introduces potential security vulnerabilities, particularly when these knowledge bases are publicly accessible and modifiable. While previous studies have exposed knowledge poisoning risks in RAG systems, existing attack methods suffer from critical limitations: they either require injecting multiple poisoned documents (resulting in poor stealthiness) or can only function effectively on simplistic queries (limiting real-world applicability). This paper reveals a more realistic knowledge poisoning attack against RAG systems that achieves successful attacks by poisoning only a single document while remaining effective for complex multi-hop questions involving complex relationships between multiple elements. Our proposed AuthChain address three challenges to ensure the poisoned documents are reliably retrieved and trusted by the LLM, even against large knowledge bases and LLM's own knowledge. Extensive experiments across six popular LLMs demonstrate that AuthChain achieves significantly higher attack success rates while maintaining superior stealthiness against RAG defense mechanisms compared to state-of-the-art baselines.
View on arXiv@article{chang2025_2505.11548, title={ One Shot Dominance: Knowledge Poisoning Attack on Retrieval-Augmented Generation Systems }, author={ Zhiyuan Chang and Mingyang Li and Xiaojun Jia and Junjie Wang and Yuekai Huang and Ziyou Jiang and Yang Liu and Qing Wang }, journal={arXiv preprint arXiv:2505.11548}, year={ 2025 } }