
Revealing and Mitigating the Challenge of Detecting Character Knowledge Errors in LLM Role-Playing

Main: 8 pages · Appendix: 14 pages · Bibliography: 3 pages · 6 figures · 20 tables
Abstract

Large language model (LLM) role-playing has gained widespread attention. Authentic character knowledge is crucial for constructing realistic LLM role-playing agents. However, existing works usually overlook LLMs' ability to detect characters' known knowledge errors (KKE) and unknown knowledge errors (UKE) while playing roles, which leads to low-quality automatic construction of character-trainable corpora. In this paper, we propose RoleKE-Bench to evaluate LLMs' ability to detect both KKE and UKE. The results indicate that even the latest LLMs struggle to detect these two types of errors effectively, especially for knowledge familiar to the character. We experiment with various reasoning strategies and propose an agent-based reasoning method, Self-Recollection and Self-Doubt (S²RD), to further explore the potential for improving error-detection capability. Experiments show that our method effectively improves LLMs' ability to detect erroneous character knowledge, but the problem remains open and requires ongoing attention.

@article{zhang2025_2409.11726,
  title={Revealing and Mitigating the Challenge of Detecting Character Knowledge Errors in LLM Role-Playing},
  author={Wenyuan Zhang and Shuaiyi Nie and Jiawei Sheng and Zefeng Zhang and Xinghua Zhang and Yongquan He and Tingwen Liu},
  journal={arXiv preprint arXiv:2409.11726},
  year={2025}
}
