ResearchTrend.AI


arXiv:2106.02162
Privately Learning Mixtures of Axis-Aligned Gaussians

3 June 2021
Ishaq Aden-Ali
H. Ashtiani
Christopher Liaw
Abstract

We consider the problem of learning mixtures of Gaussians under the constraint of approximate differential privacy. We prove that $\widetilde{O}(k^2 d \log^{3/2}(1/\delta) / \alpha^2 \varepsilon)$ samples are sufficient to learn a mixture of $k$ axis-aligned Gaussians in $\mathbb{R}^d$ to within total variation distance $\alpha$ while satisfying $(\varepsilon, \delta)$-differential privacy. This is the first result for privately learning mixtures of unbounded axis-aligned (or even unbounded univariate) Gaussians. If the covariance matrix of each Gaussian is the identity matrix, we show that $\widetilde{O}(kd/\alpha^2 + kd \log(1/\delta) / \alpha \varepsilon)$ samples are sufficient.

Recently, the "local covering" technique of Bun, Kamath, Steinke, and Wu was successfully used for privately learning high-dimensional Gaussians with a known covariance matrix, and was extended to privately learning general high-dimensional Gaussians by Aden-Ali, Ashtiani, and Kamath. Given these positive results, this approach has been proposed as a promising direction for privately learning mixtures of Gaussians. Unfortunately, we show that this is not possible.

Instead, we design a new technique for privately learning mixture distributions. A class of distributions $\mathcal{F}$ is said to be list-decodable if there is an algorithm that, given "heavily corrupted" samples from $f \in \mathcal{F}$, outputs a list of distributions $\widehat{\mathcal{F}}$ such that one of the distributions in $\widehat{\mathcal{F}}$ approximates $f$. We show that if $\mathcal{F}$ is privately list-decodable, then we can privately learn mixtures of distributions in $\mathcal{F}$. Finally, we show that axis-aligned Gaussian distributions are privately list-decodable, thereby proving that mixtures of such distributions are privately learnable.
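For readers less familiar with the quantities in the sample-complexity bounds, the two standard definitions the abstract relies on (not restated in the text above) are $(\varepsilon, \delta)$-differential privacy and total variation distance. A randomized algorithm $M$ is $(\varepsilon, \delta)$-differentially private if, for all measurable output sets $S$ and all pairs of datasets $X, X'$ differing in a single sample,

```latex
% Standard definitions assumed by the abstract.
% (epsilon, delta)-differential privacy of a randomized algorithm M:
\[
  \Pr[M(X) \in S] \;\le\; e^{\varepsilon} \, \Pr[M(X') \in S] + \delta ,
\]
% and the total variation distance between distributions f and g
% (the accuracy metric alpha in the bounds):
\[
  d_{\mathrm{TV}}(f, g)
  \;=\; \sup_{A} \bigl| f(A) - g(A) \bigr|
  \;=\; \tfrac{1}{2} \int \bigl| f(x) - g(x) \bigr| \, dx .
\]
```

So "learning to within total variation distance $\alpha$" means the algorithm outputs a distribution $\hat{f}$ with $d_{\mathrm{TV}}(\hat{f}, f) \le \alpha$, while its output distribution changes only slightly when any one input sample is replaced.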

View on arXiv