Large Language Models (LLMs) have shown promise in character imitation, enabling immersive and engaging conversations. However, they often generate content that is irrelevant to or inconsistent with a character's background. We attribute these failures to: (1) the inability to accurately recall character-specific knowledge due to entity ambiguity, and (2) a lack of awareness of the character's cognitive boundaries. To address these issues, we propose RoleRAG, a retrieval-based framework that integrates efficient entity disambiguation for knowledge indexing with a boundary-aware retriever for extracting contextually appropriate information from a structured knowledge graph. Experiments on role-playing benchmarks show that RoleRAG's calibrated retrieval helps both general-purpose and role-specific LLMs better align with character knowledge and reduce hallucinated responses.
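The abstract names two mechanisms: entity disambiguation at indexing time and boundary-aware retrieval from a knowledge graph at query time. The sketch below illustrates how such a pipeline could be wired together; the graph schema, the alias table, the year-based boundary filter, and all entity names are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch of a RoleRAG-style pipeline (assumptions, not the
# paper's implementation): character knowledge is stored as a small entity
# graph, surface mentions are collapsed to canonical entities at indexing
# time, and retrieval drops facts outside the character's knowledge
# boundary (approximated here by a temporal cutoff).

from dataclasses import dataclass, field

@dataclass
class Fact:
    subject: str
    relation: str
    obj: str
    year: int | None = None  # hypothetical temporal tag used for the boundary check

@dataclass
class KnowledgeGraph:
    aliases: dict[str, str] = field(default_factory=dict)  # surface form -> canonical entity
    facts: list[Fact] = field(default_factory=list)

    def resolve_entity(self, mention: str) -> str:
        """Entity disambiguation: map an ambiguous mention to one canonical node."""
        return self.aliases.get(mention.lower(), mention)

    def add_fact(self, subject: str, relation: str, obj: str, year: int | None = None):
        # Disambiguate at indexing time so all facts attach to the same node.
        self.facts.append(Fact(self.resolve_entity(subject), relation, obj, year))

    def retrieve(self, mention: str, boundary_year: int | None = None) -> list[Fact]:
        """Boundary-aware retrieval: return only facts the character could
        plausibly know, filtering out anything past the boundary year."""
        entity = self.resolve_entity(mention)
        return [
            f for f in self.facts
            if f.subject == entity
            and (boundary_year is None or f.year is None or f.year <= boundary_year)
        ]

# Usage: index facts about a historical figure, then retrieve within their lifetime.
kg = KnowledgeGraph(aliases={"the emperor": "Napoleon", "bonaparte": "Napoleon"})
kg.add_fact("Bonaparte", "born_in", "Corsica", year=1769)
kg.add_fact("Napoleon", "crowned", "Emperor of the French", year=1804)
kg.add_fact("Napoleon", "subject_of", "a 2023 biographical film", year=2023)

for fact in kg.retrieve("the emperor", boundary_year=1821):
    print(fact)  # the 2023 film is filtered out: outside the character's boundary
```

The design point the sketch tries to capture is the paper's stated split: disambiguation keeps knowledge indexed under a single canonical entity, while the boundary filter at retrieval time is what prevents the model from being fed facts the character could not know.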
@article{wang2025_2505.18541,
  title={RoleRAG: Enhancing LLM Role-Playing via Graph Guided Retrieval},
  author={Yongjie Wang and Jonathan Leung and Zhiqi Shen},
  journal={arXiv preprint arXiv:2505.18541},
  year={2025}
}