Rectifying Privacy and Efficacy Measurements in Machine Unlearning: A New Inference Attack Perspective

16 June 2025
Nima Naderloui
Shenao Yan
Binghui Wang
Jie Fu
Wendy Hui Wang
Weiran Liu
Yuan Hong
Main: 14 pages · Bibliography: 3 pages · Appendix: 3 pages · 9 figures · 11 tables
Abstract

Machine unlearning focuses on efficiently removing specific data from trained models, addressing privacy and compliance concerns at reasonable cost. Although exact unlearning ensures complete data removal equivalent to retraining, it is impractical for large-scale models, leading to growing interest in inexact unlearning methods. However, the lack of formal guarantees in these methods necessitates robust evaluation frameworks to assess their privacy and effectiveness. In this work, we first identify several key pitfalls of existing unlearning evaluation frameworks, e.g., focusing on average-case evaluation, targeting random samples for evaluation, and making incomplete comparisons with the retraining baseline. We then propose RULI (Rectified Unlearning Evaluation Framework via Likelihood Inference), a novel framework that addresses critical gaps in the evaluation of inexact unlearning methods. RULI introduces a dual-objective attack to measure both unlearning efficacy and privacy risks at per-sample granularity. Our findings reveal significant vulnerabilities in state-of-the-art unlearning methods: RULI achieves higher attack success rates, exposing privacy risks underestimated by existing methods. Built on a game-based foundation and validated through empirical evaluations on both image and text data (spanning tasks from classification to generation), RULI provides a rigorous, scalable, and fine-grained methodology for evaluating unlearning techniques.
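To make the per-sample likelihood-inference idea concrete, here is a minimal sketch, not the authors' RULI implementation: a generic likelihood-ratio membership test of the kind the abstract alludes to. It assumes we have per-sample losses from shadow models trained with the target sample ("in") and without it ("out"), fits a Gaussian to each set, and scores an observed loss by the ratio of the two densities. All numbers and function names below are hypothetical.

```python
import math
from statistics import mean, stdev

def gaussian_pdf(x, mu, sigma):
    # Density of N(mu, sigma^2) at x, with a guard against degenerate fits.
    sigma = max(sigma, 1e-8)
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def likelihood_ratio_score(observed_loss, in_losses, out_losses):
    """Higher score => more evidence the target sample is (still) memorized."""
    p_in = gaussian_pdf(observed_loss, mean(in_losses), stdev(in_losses))
    p_out = gaussian_pdf(observed_loss, mean(out_losses), stdev(out_losses))
    return p_in / max(p_out, 1e-12)

# Toy shadow-model losses for one target sample (hypothetical values):
in_losses = [0.10, 0.12, 0.08, 0.11, 0.09]   # models that saw the sample
out_losses = [0.90, 1.10, 0.95, 1.05, 1.00]  # models that never saw it

print(likelihood_ratio_score(0.11, in_losses, out_losses) > 1.0)  # True: looks memorized
print(likelihood_ratio_score(1.00, in_losses, out_losses) > 1.0)  # False: looks unlearned
```

Scoring each targeted sample separately, rather than averaging an attack metric over a random evaluation set, is what gives the per-sample granularity the abstract emphasizes.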

@article{naderloui2025_2506.13009,
  title={Rectifying Privacy and Efficacy Measurements in Machine Unlearning: A New Inference Attack Perspective},
  author={Nima Naderloui and Shenao Yan and Binghui Wang and Jie Fu and Wendy Hui Wang and Weiran Liu and Yuan Hong},
  journal={arXiv preprint arXiv:2506.13009},
  year={2025}
}