Machine unlearning techniques aim to mitigate unintended memorization in large language models (LLMs). However, existing approaches predominantly focus on the explicit removal of isolated facts, often overlooking latent inferential dependencies and the non-deterministic nature of knowledge within LLMs. Consequently, facts presumed forgotten may persist implicitly through correlated information. To address these challenges, we propose a knowledge unlearning evaluation framework that more accurately captures the implicit structure of real-world knowledge by representing relevant factual contexts as knowledge graphs with associated confidence scores. We further develop an inference-based evaluation protocol leveraging powerful LLMs as judges; these judges reason over the extracted knowledge subgraph to determine unlearning success. Our LLM judges utilize carefully designed prompts and are calibrated against human evaluations to ensure their trustworthiness and stability. Extensive experiments on our newly constructed benchmark demonstrate that our framework provides a more realistic and rigorous assessment of unlearning performance. Moreover, our findings reveal that current evaluation strategies tend to overestimate unlearning effectiveness. Our code is publicly available at this https URL.
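To make the setup concrete, the following is a minimal, hypothetical Python sketch (not the authors' released code) of the two ideas the abstract describes: factual context represented as confidence-weighted knowledge-graph triples, and a judge prompt assembled from the subgraph around the unlearning target. All entity names, confidence values, and helper functions (Triple, subgraph_for, judge_prompt) are illustrative assumptions.

```python
# Hypothetical sketch: confidence-weighted knowledge-graph triples and the
# subgraph an LLM judge would reason over to decide whether a "forgotten"
# fact is still inferable from correlated facts.
from dataclasses import dataclass


@dataclass
class Triple:
    head: str
    relation: str
    tail: str
    confidence: float  # assumed strength with which the model holds this fact


# Toy graph around a fact targeted for unlearning (values are illustrative only).
graph = [
    Triple("Alice", "works_at", "Acme Corp", 0.95),       # target fact to forget
    Triple("Alice", "commutes_to", "Acme HQ", 0.80),       # correlated fact
    Triple("Acme HQ", "owned_by", "Acme Corp", 0.90),      # correlated fact
]


def subgraph_for(entity: str, kg: list[Triple]) -> list[Triple]:
    """Collect the 1-hop neighborhood of the target entity."""
    return [t for t in kg if entity in (t.head, t.tail)]


def judge_prompt(target: Triple, sub: list[Triple]) -> str:
    """Format a prompt asking an LLM judge whether the target fact can still
    be inferred from the retained, confidence-weighted evidence."""
    evidence = "\n".join(
        f"- ({t.head}, {t.relation}, {t.tail}) [confidence={t.confidence:.2f}]"
        for t in sub
        if t != target
    )
    return (
        f"Target fact (supposedly unlearned): "
        f"({target.head}, {target.relation}, {target.tail})\n"
        f"Evidence the model still exhibits:\n{evidence}\n"
        "Question: Can the target fact still be inferred from this evidence? "
        "Answer YES or NO and explain your reasoning."
    )


target = graph[0]
print(judge_prompt(target, subgraph_for("Alice", graph)))
```

In this sketch, an unlearning method that only suppresses the target triple would still be flagged by the judge, since the correlated triples allow the fact to be reconstructed; this mirrors the paper's claim that fact-level removal can overestimate unlearning effectiveness.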
@article{wei2025_2506.05735,
  title={Do LLMs Really Forget? Evaluating Unlearning with Knowledge Correlation and Confidence Awareness},
  author={Rongzhe Wei and Peizhi Niu and Hans Hao-Hsun Hsu and Ruihan Wu and Haoteng Yin and Mohsen Ghassemi and Yifan Li and Vamsi K. Potluru and Eli Chien and Kamalika Chaudhuri and Olgica Milenkovic and Pan Li},
  journal={arXiv preprint arXiv:2506.05735},
  year={2025}
}