
User-Level Private Learning via Correlated Sampling

Abstract

Most works in learning with differential privacy (DP) have focused on the setting where each user has a single sample. In this work, we consider the setting where each user holds $m$ samples and the privacy protection is enforced at the level of each user's data. We show that, in this setting, we may learn with far fewer users. Specifically, we show that, as long as each user receives sufficiently many samples, we can learn any privately learnable class via an $(\epsilon, \delta)$-DP algorithm using only $O(\log(1/\delta)/\epsilon)$ users. For $\epsilon$-DP algorithms, we show that we can learn using only $O_{\epsilon}(d)$ users even in the local model, where $d$ is the probabilistic representation dimension. In both cases, we show a nearly-matching lower bound on the number of users required. A crucial component of our results is a generalization of global stability [Bun et al., FOCS 2020] that allows the use of public randomness. Under this relaxed notion, we employ a correlated sampling strategy to show that global stability can be boosted to be arbitrarily close to one, at a polynomial expense in the number of samples.
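To give intuition for the correlated sampling primitive the abstract refers to, here is a minimal Python sketch of classic (Holenstein-style) correlated sampling over a finite domain, driven entirely by shared public randomness. This is an illustrative sketch of the generic primitive, not the paper's specific algorithm; the distributions `P`, `Q` and the seed below are hypothetical.

```python
import random

def correlated_sample(dist, shared_seed, max_rounds=100000):
    """Correlated sampling from a distribution over a finite domain.

    `dist` maps each domain element to its probability mass. Two parties
    running this with the same `shared_seed` (public randomness) but close
    distributions P and Q output the same element with probability at
    least 1 - 2 * TV(P, Q).
    """
    domain = sorted(dist)             # fixed ordering so all parties agree
    rng = random.Random(shared_seed)  # shared public randomness
    for _ in range(max_rounds):
        x = domain[rng.randrange(len(domain))]  # uniform proposal
        u = rng.random()                         # uniform threshold in [0, 1)
        if u < dist[x]:                          # accept with probability dist[x]
            return x
    raise RuntimeError("no sample accepted; increase max_rounds")

# Hypothetical example: two users whose local estimates induce close
# distributions over candidate hypotheses output the same hypothesis.
P = {"h1": 0.60, "h2": 0.30, "h3": 0.10}
Q = {"h1": 0.55, "h2": 0.35, "h3": 0.10}
seed = 2024  # public seed shared across users
print(correlated_sample(P, seed), correlated_sample(Q, seed))
```

Because both calls consume the identical stream of proposals and thresholds, disagreement can only occur in rounds where one distribution accepts and the other rejects, which is what bounds the disagreement probability by (roughly) the total variation distance between the two distributions.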
