Covariance-Aware Private Mean Estimation Without Private Covariance Estimation

24 June 2021
Gavin Brown
Marco Gaboardi
Adam D. Smith
Jonathan R. Ullman
Lydia Zakynthinou
Abstract

We present two sample-efficient differentially private mean estimators for $d$-dimensional (sub)Gaussian distributions with unknown covariance. Informally, given $n \gtrsim d/\alpha^2$ samples from such a distribution with mean $\mu$ and covariance $\Sigma$, our estimators output $\tilde\mu$ such that $\|\tilde\mu - \mu\|_{\Sigma} \leq \alpha$, where $\|\cdot\|_{\Sigma}$ is the Mahalanobis distance. All previous estimators with the same guarantee either require strong a priori bounds on the covariance matrix or require $\Omega(d^{3/2})$ samples. Each of our estimators is based on a simple, general approach to designing differentially private mechanisms, but with novel technical steps to make the estimator private and sample-efficient. Our first estimator samples a point with approximately maximum Tukey depth using the exponential mechanism, but restricted to the set of points of large Tukey depth. Its accuracy guarantees hold even for data sets that have a small amount of adversarial corruption. Proving that this mechanism is private requires a novel analysis. Our second estimator perturbs the empirical mean of the data set with noise calibrated to the empirical covariance, without releasing the covariance itself. Its sample complexity guarantees hold more generally for subgaussian distributions, albeit with a slightly worse dependence on the privacy parameter. For both estimators, careful preprocessing of the data is required to satisfy differential privacy.
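
As a rough illustration of the second estimator's core idea, the sketch below adds Gaussian noise shaped by the empirical covariance to the empirical mean, so the error is naturally measured in the Mahalanobis norm. This is not the paper's mechanism: the `noise_multiplier` parameter is a hypothetical placeholder rather than a privacy calibration, and the careful data preprocessing the paper requires for differential privacy is omitted, so the sketch on its own is not private.

```python
import numpy as np

def covariance_calibrated_mean(samples, noise_multiplier=1.0, rng=None):
    """Illustrative sketch only (assumptions noted above, not the paper's
    algorithm): perturb the empirical mean with Gaussian noise whose
    covariance is proportional to the empirical covariance."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = samples.shape
    mu_hat = samples.mean(axis=0)              # empirical mean
    sigma_hat = np.cov(samples, rowvar=False)  # empirical covariance (d x d)
    # Noise shaped like Sigma_hat / n: directions in which the data varies
    # more receive proportionally more noise, so the error stays balanced
    # when measured in the Mahalanobis norm ||.||_Sigma.
    noise_cov = (noise_multiplier / n) * sigma_hat
    noise = rng.multivariate_normal(np.zeros(d), noise_cov)
    return mu_hat + noise

# Usage example on synthetic anisotropic Gaussian data.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cov = np.array([[4.0, 1.0], [1.0, 0.5]])
    data = rng.multivariate_normal([1.0, -2.0], cov, size=5000)
    print(covariance_calibrated_mean(data, rng=rng))
```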
