How Hard Is Robust Mean Estimation?

19 March 2019
Samuel B. Hopkins
Jerry Li
Abstract

Robust mean estimation is the problem of estimating the mean $\mu \in \mathbb{R}^d$ of a $d$-dimensional distribution $D$ from a list of independent samples, an $\epsilon$-fraction of which have been arbitrarily corrupted by a malicious adversary. Recent algorithmic progress has resulted in the first polynomial-time algorithms which achieve \emph{dimension-independent} rates of error: for instance, if $D$ has covariance $I$, in polynomial time one may find $\hat{\mu}$ with $\|\mu - \hat{\mu}\| \leq O(\sqrt{\epsilon})$. However, error rates achieved by current polynomial-time algorithms, while dimension-independent, are sub-optimal in many natural settings, such as when $D$ is sub-Gaussian or has bounded $4$-th moments. In this work we give worst-case complexity-theoretic evidence that improving on the error rates of current polynomial-time algorithms for robust mean estimation may be computationally intractable in natural settings. We show that several natural approaches to improving error rates of current polynomial-time robust mean estimation algorithms would imply efficient algorithms for the small-set expansion problem, refuting Raghavendra and Steurer's small-set expansion hypothesis (so long as $P \neq NP$). We also give the first direct reduction to the robust mean estimation problem, starting from a plausible but nonstandard variant of the small-set expansion problem.
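
As a concrete illustration of the setup in the abstract, the following is a minimal NumPy sketch, not the authors' algorithm: the coordinate-wise median is only a simple baseline, and the dimension, sample size, and corruption pattern are arbitrary illustrative choices. It draws samples from $D = N(\mu, I)$, lets an adversary replace an $\epsilon$-fraction of them, and shows how far the naive sample mean is pulled from $\mu$.

```python
import numpy as np

# Illustrative sketch of the robust mean estimation problem (not the paper's method).
rng = np.random.default_rng(0)

d = 200           # dimension (illustrative choice)
n = 10_000        # number of samples
eps = 0.05        # fraction of adversarially corrupted samples
mu = np.zeros(d)  # true mean; here D = N(mu, I), i.e. identity covariance

# Draw clean samples from D.
samples = rng.normal(loc=mu, scale=1.0, size=(n, d))

# An adversary arbitrarily replaces an eps-fraction of the samples.
# Here: park them far away along the first coordinate direction.
k = int(eps * n)
samples[:k] = 0.0
samples[:k, 0] = 100.0

naive_mean = samples.mean(axis=0)            # pulled ~eps * 100 off in coordinate 0
coord_median = np.median(samples, axis=0)    # simple baseline, largely unaffected here

print("naive mean error:       ", np.linalg.norm(naive_mean - mu))
print("coordinate-median error:", np.linalg.norm(coord_median - mu))
```

The naive sample mean suffers error that scales with the magnitude of the corruptions, whereas the polynomial-time estimators the abstract refers to achieve $\|\mu - \hat{\mu}\| \leq O(\sqrt{\epsilon})$ independent of the dimension in this identity-covariance setting; the paper's question is whether beating that rate (e.g. for sub-Gaussian $D$) is computationally feasible.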
