Bringing Differential Private SGD to Practice: On the Independence of Gaussian Noise and the Number of Training Rounds

International Conference on Machine Learning (ICML), 2021
Abstract

In the context of DP-SGD, each round communicates a local SGD update, which leaks some new information about the underlying local data set to the outside world. To provide privacy, Gaussian noise is added to the local SGD updates. However, privacy leakage still accumulates over multiple training rounds. Therefore, to control the privacy leakage over an increasing number of training rounds, the Gaussian noise added per local SGD update must be increased. This dependence of the noise magnitude σ on the number of training rounds T may impose an impractical upper bound on T (because σ cannot be too large), leading to a low-accuracy global model (because the global model receives too few local SGD updates). This makes DP-SGD much less competitive than other existing privacy techniques. We show for the first time that for (ϵ,δ)-differential privacy, σ can be chosen equal to √(2(ϵ + ln(1/δ))/ϵ) regardless of the total number of training rounds T. In other words, σ no longer depends on T (and the aggregated privacy leakage converges to a limit). This important discovery brings DP-SGD to practice, because σ can remain small enough for the trained model to achieve high accuracy even for large T, as commonly occurs in practice.
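As a minimal sketch of the abstract's claim, the following computes the round-independent noise multiplier σ = √(2(ϵ + ln(1/δ))/ϵ) for a given (ϵ,δ) target; the function name is illustrative, not from the paper.

```python
import math

def dp_sgd_sigma(epsilon: float, delta: float) -> float:
    # Round-independent noise multiplier claimed in the abstract:
    #   sigma = sqrt(2 * (epsilon + ln(1/delta)) / epsilon)
    # Note that sigma does not depend on the number of training rounds T.
    return math.sqrt(2.0 * (epsilon + math.log(1.0 / delta)) / epsilon)

# Example: a common privacy target (epsilon = 1, delta = 1e-5)
sigma = dp_sgd_sigma(epsilon=1.0, delta=1e-5)
print(f"sigma = {sigma:.4f}")
```

For ϵ = 1 and δ = 10⁻⁵ this gives σ ≈ 5.0, and tightening ϵ increases the required noise, while T plays no role in the formula.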
