Verifying Robust Unlearning: Probing Residual Knowledge in Unlearned Models

21 April 2025
Hao Xuan, Xingyu Li
AAML, MU
Abstract

Machine Unlearning (MUL) is crucial for privacy protection and content regulation, yet recent studies reveal that traces of forgotten information persist in unlearned models, enabling adversaries to resurface removed knowledge. Existing verification methods only confirm whether unlearning was executed, failing to detect such residual information leaks. To address this, we introduce the concept of Robust Unlearning, ensuring models are indistinguishable from retraining and resistant to adversarial recovery. To empirically evaluate whether unlearning techniques meet this security standard, we propose the Unlearning Mapping Attack (UMA), a post-unlearning verification framework that actively probes models for forgotten traces using adversarial queries. Extensive experiments on discriminative and generative tasks show that existing unlearning techniques remain vulnerable, even when passing existing verification metrics. By establishing UMA as a practical verification tool, this study sets a new standard for assessing and enhancing machine unlearning security.
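
The abstract does not spell out how UMA constructs its adversarial queries, but the core idea — actively searching the input space of an unlearned model for queries that resurface supposedly forgotten outputs — can be illustrated with a short sketch. The PGD-style probe below is a generic stand-in for a discriminative task, not the paper's exact method; the name probe_residual_knowledge and the hyperparameters (epsilon, alpha, steps) are all hypothetical.

import torch
import torch.nn.functional as F

def probe_residual_knowledge(unlearned_model, x_forgotten, y_forgotten,
                             epsilon=8 / 255, alpha=2 / 255, steps=40):
    """PGD-style probe (illustrative, not the paper's UMA): search an
    L-infinity ball around forgotten inputs for queries that make the
    unlearned model re-emit the forgotten labels. A high recovery rate
    suggests residual knowledge survived unlearning."""
    unlearned_model.eval()
    for p in unlearned_model.parameters():
        p.requires_grad_(False)  # only the probe perturbation is optimized

    delta = torch.zeros_like(x_forgotten, requires_grad=True)
    for _ in range(steps):
        logits = unlearned_model(x_forgotten + delta)
        # Descend on the loss w.r.t. the *forgotten* labels: the goal is
        # to recover the removed behavior, not to cause misclassification.
        loss = F.cross_entropy(logits, y_forgotten)
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()  # signed gradient step
            delta.clamp_(-epsilon, epsilon)     # stay inside the probe budget
            delta.grad.zero_()
        # A real attack would also clamp x_forgotten + delta to the
        # valid input range (e.g., [0, 1] for images); omitted here.

    with torch.no_grad():
        preds = unlearned_model(x_forgotten + delta).argmax(dim=1)
    # Fraction of forgotten samples whose erased labels can be resurfaced.
    return (preds == y_forgotten).float().mean().item()

Under the Robust Unlearning standard the paper proposes, a model that has truly forgotten should score no higher on such a probe than a reference model retrained from scratch without the forgotten data.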

@article{xuan2025_2504.14798,
  title={Verifying Robust Unlearning: Probing Residual Knowledge in Unlearned Models},
  author={Hao Xuan and Xingyu Li},
  journal={arXiv preprint arXiv:2504.14798},
  year={2025}
}