On the Growth of Mistakes in Differentially Private Online Learning: A Lower Bound Perspective

Abstract
In this paper, we provide lower bounds for Differentially Private (DP) Online Learning algorithms. Our result shows that, for a broad class of $(\varepsilon, \delta)$-DP online algorithms and for $T$ such that $\delta = O(1/T)$, the expected number of mistakes incurred by the algorithm grows as $\Omega(\log T)$. This matches the upper bound obtained by Golowich and Livni (2021) and stands in contrast to non-private online learning, where the number of mistakes is independent of $T$. To the best of our knowledge, our work is the first result towards settling lower bounds for DP online learning and partially addresses the open question posed in Sanyal and Ramponi (2022).
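To make the rate concrete, the following is a schematic rendering of the claimed bound; the mistake-counting notation $M_{\mathcal{A}}(T)$ is illustrative and not quoted from the paper, and constants are suppressed:

% Schematic form of the abstract's claim (M_A(T) is our notation, not the
% paper's): for any online learner A in the class of algorithms considered
% that is (eps, delta)-DP with delta = O(1/T), the adversary can force
\[
  \mathbb{E}\bigl[ M_{\mathcal{A}}(T) \bigr] \;=\; \Omega(\log T),
\]
% where M_A(T) denotes the number of mistakes A makes over T rounds. By
% contrast, a non-private learner such as the Standard Optimal Algorithm on
% a Littlestone class of dimension d makes at most d mistakes, a quantity
% with no dependence on T.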