Verifiable and Provably Secure Machine Unlearning

17 October 2022
Thorsten Eisenhofer
Doreen Riepel
Varun Chandrasekaran
Esha Ghosh
Olga Ohrimenko
Nicolas Papernot
    AAML
    MU
ArXiv | PDF | HTML
Abstract

Machine unlearning aims to remove points from the training dataset of a machine learning model after training, e.g., when a user requests that their data be deleted. While many unlearning methods have been proposed, none of them enable users to audit the procedure. Furthermore, recent work shows that a user is unable to verify whether their data was unlearnt from an inspection of the model parameters alone. Rather than reasoning about parameters, we propose to view verifiable unlearning as a security problem. To this end, we present the first cryptographic definition of verifiable unlearning to formally capture the guarantees of an unlearning system. In this framework, the server first computes a proof that the model was trained on a dataset D. Given a user's data point d requested to be deleted, the server updates the model using an unlearning algorithm. It then provides a proof of the correct execution of unlearning and that d is not part of D', where D' is the new training dataset (i.e., d has been removed). Our framework is generally applicable to different unlearning techniques, which we abstract as admissible functions. We instantiate a protocol in the framework, based on cryptographic assumptions, using SNARKs and hash chains. Finally, we implement the protocol for three different unlearning techniques and validate its feasibility for linear regression, logistic regression, and neural networks.
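The abstract describes the protocol flow only at a high level. As a rough illustration of the hash-chain component, the following Python sketch (not from the paper; the class and method names are hypothetical, and the SNARK proofs of correct training and unlearning are abstracted away as comments) shows how an append-only chain of dataset-update operations can be replayed to check that a deleted point d is no longer part of the updated dataset D'.

# Illustrative sketch only (not from the paper): names are hypothetical,
# and the SNARK proofs of correct training/unlearning are abstracted away.
import hashlib
import json


def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()


class UpdateChain:
    """Append-only hash chain recording dataset-update operations."""

    def __init__(self):
        self.head = b"\x00" * 32  # genesis value of the chain
        self.log = []             # list of (operation, record_id) entries

    def append(self, operation: str, record_id: str) -> bytes:
        # Each new head commits to the previous head and the new entry.
        entry = json.dumps({"op": operation, "id": record_id}).encode()
        self.head = sha256(self.head + entry)
        self.log.append((operation, record_id))
        return self.head

    def current_records(self) -> set:
        """Replay the logged operations to recover current dataset membership."""
        records = set()
        for op, rid in self.log:
            if op == "add":
                records.add(rid)
            elif op == "unlearn":
                records.discard(rid)
        return records


# The server commits to the dataset D and then to the unlearning of d.
chain = UpdateChain()
chain.append("add", "d1")
chain.append("add", "d2")      # D = {d1, d2}
chain.append("unlearn", "d2")  # user requests deletion of d = d2

# A verifier replays the log and checks that d is absent from D'.
assert "d2" not in chain.current_records()
# In the actual protocol, succinct (SNARK) proofs would attest that the
# model was trained and updated consistently with this chain, so the
# verifier would not need to replay the data itself.

This only illustrates the bookkeeping; the paper's contribution is the cryptographic definition and the SNARK-based proofs that make the training and unlearning steps verifiable without re-executing them.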

View on arXiv
@article{eisenhofer2025_2210.09126,
  title={Verifiable and Provably Secure Machine Unlearning},
  author={Thorsten Eisenhofer and Doreen Riepel and Varun Chandrasekaran and Esha Ghosh and Olga Ohrimenko and Nicolas Papernot},
  journal={arXiv preprint arXiv:2210.09126},
  year={2025}
}