Leveraging Per-Instance Privacy for Machine Unlearning

Abstract

We present a principled, per-instance approach to quantifying the difficulty of unlearning via fine-tuning. We begin by sharpening an analysis of noisy gradient descent for unlearning (Chien et al., 2024), obtaining a better utility-unlearning tradeoff by replacing worst-case privacy loss bounds with per-instance privacy losses (Thudi et al., 2024), each of which bounds the (Rényi) divergence to retraining without an individual data point. To demonstrate the practical applicability of our theory, we present empirical results showing that our theoretical predictions are borne out both for Stochastic Gradient Langevin Dynamics (SGLD) and for standard fine-tuning without explicit noise. We further demonstrate that per-instance privacy losses correlate well with several existing data difficulty metrics, while also identifying harder groups of data points, and introduce novel evaluation methods based on loss barriers. Altogether, our findings provide a foundation for more efficient and adaptive unlearning strategies tailored to the unique properties of individual data points.
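The noisy fine-tuning procedure the analysis builds on is standard SGLD: a gradient step perturbed by Gaussian noise whose scale is tied to the step size and a temperature parameter. The sketch below is a minimal illustration of that update rule on a toy quadratic loss, not the paper's experimental setup; the function name, learning rate, and temperature are illustrative choices.

```python
import numpy as np

def sgld_step(theta, grad, lr, temperature=1.0, rng=None):
    """One SGLD update: a gradient descent step plus Gaussian noise
    with per-coordinate standard deviation sqrt(2 * lr * temperature)."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.normal(size=theta.shape)
    return theta - lr * grad + np.sqrt(2.0 * lr * temperature) * noise

# Toy example: fine-tune a scalar parameter on the loss (theta - 1)^2.
rng = np.random.default_rng(0)
theta = np.array([5.0])
for _ in range(2000):
    grad = 2.0 * (theta - 1.0)  # gradient of the quadratic loss
    theta = sgld_step(theta, grad, lr=1e-2, temperature=1e-3, rng=rng)
print(theta)
```

At low temperature the iterates concentrate near the minimizer at 1.0; the injected noise is what makes the resulting distribution over parameters amenable to the (Rényi) divergence comparisons against retraining that the abstract describes.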

@article{sepahvand2025_2505.18786,
  title={Leveraging Per-Instance Privacy for Machine Unlearning},
  author={Nazanin Mohammadi Sepahvand and Anvith Thudi and Berivan Isik and Ashmita Bhattacharyya and Nicolas Papernot and Eleni Triantafillou and Daniel M. Roy and Gintare Karolina Dziugaite},
  journal={arXiv preprint arXiv:2505.18786},
  year={2025}
}