
Variance-Reduced (ε, δ)-Unlearning using Forget Set Gradients

Martin Van Waerebeke
Marco Lorenzi
Kevin Scaman
El Mahdi El Mhamdi
Giovanni Neglia
Main: 8 pages · Bibliography: 4 pages · Appendix: 5 pages · 4 figures · 5 tables
Abstract

In machine unlearning, (ε, δ)-unlearning is a popular framework that provides formal guarantees on the effectiveness of removing a subset of the training data, the forget set, from a trained model. For strongly convex objectives, existing first-order methods achieve (ε, δ)-unlearning, but they use the forget set only to calibrate the injected noise, never as a direct optimization signal. In contrast, efficient empirical heuristics often exploit the forget samples (e.g., via gradient ascent) but come with no formal unlearning guarantees. We bridge this gap with the Variance-Reduced Unlearning (VRU) algorithm. To the best of our knowledge, VRU is the first first-order algorithm that directly includes forget set gradients in its update rule while provably satisfying (ε, δ)-unlearning. We establish the convergence of VRU and show that incorporating the forget set yields strictly improved rates, i.e., a better dependence on the achieved error than existing first-order (ε, δ)-unlearning methods. Moreover, we prove that, in a low-error regime, VRU asymptotically outperforms any first-order method that ignores the forget set. Experiments (this http URL) corroborate our theory, showing consistent gains over both state-of-the-art certified unlearning methods and empirical baselines that explicitly leverage the forget set.
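The abstract contrasts two ingredients: using forget-set gradients directly in the update rule, and injecting noise to obtain formal indistinguishability from retraining. The toy sketch below illustrates that general idea on a strongly convex ridge-regression problem; it is a minimal illustration under assumed choices (the problem, step size, noise scale, and estimator here are hypothetical, not the paper's actual VRU algorithm or its calibrated noise).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical strongly convex problem: ridge regression on n samples.
n, d = 200, 5
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)
lam = 0.1

def grad(w, Xs, ys):
    # Gradient of the subset's contribution to the full objective
    # (1/2n) * sum of squared residuals + (lam/2) * ||w||^2,
    # with the regularizer split proportionally across samples.
    return Xs.T @ (Xs @ w - ys) / n + lam * w * len(ys) / n

forget = np.arange(20)           # indices of the forget set (assumed)
retain = np.arange(20, n)

# 1) Train on the full dataset with plain gradient descent.
w = np.zeros(d)
for _ in range(500):
    w -= 0.1 * grad(w, X, y)

# 2) Unlearn: descend the retain-set objective written as
#    full-set gradient MINUS forget-set gradient, so the forget
#    gradients enter the update rule directly (the high-level idea
#    the abstract describes; the paper's exact variance-reduced
#    estimator is not given in this source).
eta, sigma = 0.1, 1e-3           # sigma would come from the (eps, delta) analysis
for _ in range(200):
    g = grad(w, X, y) - grad(w, X[forget], y[forget])
    w -= eta * g
w += sigma * rng.normal(size=d)  # Gaussian perturbation for indistinguishability

# 3) Reference: retrain from scratch on the retain set only.
w_ref = np.zeros(d)
for _ in range(500):
    w_ref -= 0.1 * grad(w_ref, X[retain], y[retain])

print(np.linalg.norm(w - w_ref))  # small: unlearned model is close to retraining
```

The key design point mirrored here is that the forget set supplies an optimization signal (the subtracted gradient term), rather than only setting the noise magnitude as in earlier certified first-order methods.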
