
On the Growth of Mistakes in Differentially Private Online Learning: A Lower Bound Perspective

Abstract

In this paper, we provide lower bounds for Differentially Private (DP) online learning algorithms. Our result shows that, for a broad class of $(\varepsilon,\delta)$-DP online algorithms, for $T$ such that $\log T \leq O(1/\delta)$, the expected number of mistakes incurred by the algorithm grows as $\Omega(\log \frac{T}{\delta})$. This matches the upper bound obtained by Golowich and Livni (2021) and stands in contrast to non-private online learning, where the number of mistakes is independent of $T$. To the best of our knowledge, our work is the first result towards settling lower bounds for DP online learning and partially addresses the open question posed in Sanyal and Ramponi (2022).
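For concreteness, the abstract's claim can be restated as an informal theorem sketch. The notation $M_T(\mathcal{A})$ for the mistake count and the quantifier structure below are assumptions based on standard online-learning conventions, not taken verbatim from the paper.

\begin{theorem}[Lower bound, informal sketch]
Let $\mathcal{A}$ be an $(\varepsilon,\delta)$-differentially private online learner from the class of algorithms considered in the paper. Then for every horizon $T$ satisfying $\log T \leq O(1/\delta)$, there exists an input sequence on which the expected number of mistakes of $\mathcal{A}$ satisfies
\[
  \mathbb{E}\!\left[\, M_T(\mathcal{A}) \,\right] \;\geq\; \Omega\!\left(\log \frac{T}{\delta}\right),
\]
where $M_T(\mathcal{A})$ denotes the number of rounds $t \in \{1,\dots,T\}$ in which the learner's prediction disagrees with the true label.
\end{theorem}

By contrast, a non-private learner for a class of Littlestone dimension $d$ can achieve a mistake bound depending only on $d$, with no dependence on $T$; the $\Omega(\log \frac{T}{\delta})$ growth is therefore the price of privacy in this setting.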
