
Online Learning and Unlearning

Abstract

We formalize the problem of online learning-unlearning, where a model is updated sequentially in an online setting while accommodating unlearning requests between updates. After a data point is unlearned, all subsequent outputs must be statistically indistinguishable from those of a model trained without that point. We present two online learner-unlearner (OLU) algorithms, both built upon online gradient descent (OGD). The first, passive OLU, leverages OGD's contractive property and injects noise when unlearning occurs, incurring no additional computation. The second, active OLU, uses an offline unlearning algorithm that shifts the model toward a solution excluding the deleted data. Under standard convexity and smoothness assumptions, both methods achieve regret bounds comparable to those of standard OGD, demonstrating that unlearning guarantees can be provided at little cost in regret.
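
The abstract gives no pseudocode, so the following is a minimal sketch of the passive OLU idea as described above: run plain OGD, whose contractive updates shrink a deleted point's residual influence over time, and inject Gaussian noise when an unlearning request arrives so that subsequent iterates are statistically masked. The squared loss, step size, noise scale, and request timing below are hypothetical illustrations, not the paper's construction.

import numpy as np

# Hypothetical problem setup: linear model, squared loss, synthetic stream.
rng = np.random.default_rng(0)
d = 5                      # parameter dimension (illustrative)
eta = 0.1                  # OGD step size (illustrative)
sigma = 0.5                # unlearning noise scale (illustrative; the paper
                           # would calibrate this to the indistinguishability
                           # guarantee, which the abstract does not specify)
w = np.zeros(d)            # model iterate

def grad(w, x, y):
    # Gradient of the example loss 0.5 * (w @ x - y)**2.
    return (w @ x - y) * x

stream = [(rng.normal(size=d), rng.normal()) for _ in range(100)]

for t, (x, y) in enumerate(stream):
    w -= eta * grad(w, x, y)           # standard OGD update
    if t == 49:                        # an unlearning request arrives here
        # Passive unlearning: no recomputation. Add calibrated Gaussian
        # noise so the remaining influence of the deleted point, already
        # shrunk by OGD's contractive updates, is statistically hidden.
        w += sigma * rng.normal(size=d)

An active OLU variant would replace the noise injection with an explicit correction step, e.g., an offline unlearning update that moves the iterate toward a solution computed without the deleted point, trading extra computation for less injected noise.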

@article{hu2025_2505.08557,
  title={Online Learning and Unlearning},
  author={Yaxi Hu and Bernhard Schölkopf and Amartya Sanyal},
  journal={arXiv preprint arXiv:2505.08557},
  year={2025}
}