With growing demands for privacy protection, security, and legal compliance (e.g., GDPR), machine unlearning has emerged as a critical technique for ensuring the controllability and regulatory alignment of machine learning models. However, a fundamental challenge in this field lies in effectively verifying whether unlearning operations have been successfully and thoroughly executed. Despite a growing body of work on unlearning techniques, verification methodologies remain comparatively underexplored and often fragmented. Existing approaches lack a unified taxonomy and a systematic framework for evaluation. To bridge this gap, this paper presents the first structured survey of machine unlearning verification methods. We propose a taxonomy that organizes current techniques into two principal categories -- behavioral verification and parametric verification -- based on the type of evidence used to assess unlearning fidelity. We examine representative methods within each category, analyze their underlying assumptions, strengths, and limitations, and identify potential vulnerabilities in practical deployment. In closing, we articulate a set of open problems in current verification research, aiming to provide a foundation for developing more robust, efficient, and theoretically grounded unlearning verification mechanisms.
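To make the taxonomy concrete, the following is a minimal, self-contained sketch (not drawn from any specific surveyed method) contrasting the two evidence types on a toy logistic model: behavioral_score measures residual confidence on the forget set, a behavioral signal that should drop toward the level of a from-scratch retrained model after successful unlearning, while parametric_distance compares the unlearned model's weights directly against a retrained reference. All function names, the toy model, and the synthetic weights are illustrative assumptions, not methods from the survey.

# Minimal sketch of the two verification evidence types; all names are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def predict_proba(weights, X):
    """Toy logistic model: probability that each row of X belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-X @ weights))

def behavioral_score(weights, X_forget, y_forget):
    """Behavioral evidence: mean confidence the model still assigns to the
    true labels of the forget set. After successful unlearning, this should
    fall toward the level of a model retrained without X_forget."""
    p = predict_proba(weights, X_forget)
    return float(np.mean(np.where(y_forget == 1, p, 1.0 - p)))

def parametric_distance(weights_unlearned, weights_retrained):
    """Parametric evidence: L2 distance between the unlearned model's
    parameters and a from-scratch retrained reference."""
    return float(np.linalg.norm(weights_unlearned - weights_retrained))

# Hypothetical weight vectors standing in for trained, retrained, and unlearned models.
d = 8
X_forget = rng.normal(size=(32, d))
y_forget = rng.integers(0, 2, size=32)
w_original = rng.normal(size=d)
w_retrained = rng.normal(size=d)
w_unlearned = w_retrained + 0.01 * rng.normal(size=d)  # close to the retrained reference

print("behavioral (original): ", behavioral_score(w_original, X_forget, y_forget))
print("behavioral (unlearned):", behavioral_score(w_unlearned, X_forget, y_forget))
print("parametric distance:   ", parametric_distance(w_unlearned, w_retrained))

Note the access gap between the two checks: a behavioral check needs only query access to the model, whereas a parametric check assumes white-box access to the weights and a retrained reference that is often impractical to obtain, which is one reason the two categories are analyzed separately.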
@article{xue2025_2506.15115,
  title={Towards Reliable Forgetting: A Survey on Machine Unlearning Verification, Challenges, and Future Directions},
  author={Lulu Xue and Shengshan Hu and Wei Lu and Yan Shen and Dongxu Li and Peijin Guo and Ziqi Zhou and Minghui Li and Yanjun Zhang and Leo Yu Zhang},
  journal={arXiv preprint arXiv:2506.15115},
  year={2025}
}