Revealing and Mitigating the Challenge of Detecting Character Knowledge Errors in LLM Role-Playing

Large language model (LLM) role-playing has attracted widespread attention. Authentic character knowledge is crucial for building realistic LLM role-playing agents. However, existing works usually overlook whether LLMs can detect characters' known knowledge errors (KKE) and unknown knowledge errors (UKE) while playing roles, which leads to low-quality automatic construction of character training corpora. In this paper, we propose RoleKE-Bench to evaluate LLMs' ability to detect KKE and UKE. The results indicate that even the latest LLMs struggle to detect these two types of errors effectively, especially when the knowledge involved is familiar. We experimented with various reasoning strategies and propose an agent-based reasoning method, Self-Recollection and Self-Doubt (SRD), to further explore the potential for improving error-detection capability. Experiments show that our method effectively improves the LLMs' ability to detect erroneous character knowledge, though the problem remains open and requires ongoing attention.
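The abstract names a two-step agent scheme, Self-Recollection followed by Self-Doubt. A minimal sketch of that control flow might look like the following; the toy fact base and the `recollect`, `initial_verdict`, and `self_doubt` helpers are illustrative assumptions, not the paper's implementation, and a real system would back each step with LLM calls rather than dictionary lookups.

```python
# Hypothetical sketch of an SRD-style pipeline: recollect what the character
# plausibly knows, issue an initial verdict, then deliberately re-examine it.

# Toy in-character fact base: True marks facts the character knows; False
# marks knowledge outside the character's scope (e.g. anachronisms).
FACTS = {
    "Beethoven": {
        "composed the Ninth Symphony": True,
        "uploaded music to streaming platforms": False,
    }
}

def recollect(character: str) -> dict:
    """Self-Recollection: retrieve the character's knowledge profile."""
    return FACTS.get(character, {})

def initial_verdict(statement: str, profile: dict) -> str:
    """Label a statement: OK, UKE (knowledge the character should not have),
    or KKE (a claim about known knowledge that cannot be recollected)."""
    if statement in profile:
        return "OK" if profile[statement] else "UKE"
    return "KKE"

def self_doubt(statement: str, verdict: str, profile: dict) -> str:
    """Self-Doubt: re-check the verdict against the recollected profile.
    Here the doubt pass simply re-derives the label; an LLM agent would
    instead critique its own reasoning before committing to an answer."""
    return initial_verdict(statement, profile)

def srd_detect(character: str, statement: str) -> str:
    profile = recollect(character)
    return self_doubt(statement, initial_verdict(statement, profile), profile)

print(srd_detect("Beethoven", "composed the Ninth Symphony"))            # OK
print(srd_detect("Beethoven", "uploaded music to streaming platforms"))  # UKE
```

The point of the separation is that the doubt pass operates on an already-committed verdict, giving the agent a chance to catch its own over-confident recollections before they propagate into a training corpus.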
@article{zhang2025_2409.11726,
  title={Revealing and Mitigating the Challenge of Detecting Character Knowledge Errors in LLM Role-Playing},
  author={Wenyuan Zhang and Shuaiyi Nie and Jiawei Sheng and Zefeng Zhang and Xinghua Zhang and Yongquan He and Tingwen Liu},
  journal={arXiv preprint arXiv:2409.11726},
  year={2025}
}