Learning GMMs with Nearly Optimal Robustness Guarantees

19 April 2021
Allen Liu
Ankur Moitra
arXiv:2104.09665
Abstract

In this work we solve the problem of robustly learning a high-dimensional Gaussian mixture model with $k$ components from $\epsilon$-corrupted samples up to accuracy $\widetilde{O}(\epsilon)$ in total variation distance, for any constant $k$ and under mild assumptions on the mixture. This robustness guarantee is optimal up to polylogarithmic factors. The main challenge is that most earlier works rely on learning the individual components of the mixture, but this is impossible in our setting, at least for the types of strong robustness guarantees we are aiming for. Instead we introduce a new framework, which we call \emph{strong observability}, that gives us a route to circumvent this obstacle.
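For intuition about the setup: "$\epsilon$-corrupted samples" commonly means an adversary may replace an $\epsilon$-fraction of the clean samples with arbitrary points, and the goal is to output a mixture within $\widetilde{O}(\epsilon)$ total variation distance of the true one. The following is a minimal illustrative sketch of generating such a corrupted sample set; it is not code from the paper, and all function names and parameter choices here are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

def sample_gmm(n, weights, means, covs):
    # Draw component indices according to the mixing weights, then
    # sample each point from its chosen Gaussian component.
    ks = rng.choice(len(weights), size=n, p=weights)
    return np.stack([rng.multivariate_normal(means[k], covs[k]) for k in ks])

def corrupt(samples, eps, adversary):
    # Replace a uniformly chosen eps-fraction of the samples with
    # arbitrary adversarial points. (This simple adversary sees only
    # the data's shape; stronger adversaries may inspect the samples.)
    out = samples.copy()
    idx = rng.choice(len(out), size=int(eps * len(out)), replace=False)
    out[idx] = adversary(len(idx), out.shape[1])
    return out

# Illustrative instance: dimension d = 2, k = 2 components, eps = 0.05.
weights = np.array([0.5, 0.5])
means = [np.zeros(2), 4.0 * np.ones(2)]
covs = [np.eye(2), np.eye(2)]
clean = sample_gmm(10_000, weights, means, covs)
corrupted = corrupt(clean, eps=0.05,
                    adversary=lambda m, d: 100.0 * np.ones((m, d)))

Note that after corruption, per-component statistics computed from the data can be arbitrarily distorted, which is why approaches that learn components individually break down in this regime.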
