Learning GMMs with Nearly Optimal Robustness Guarantees

Annual Conference Computational Learning Theory (COLT), 2021
Abstract

In this work we solve the problem of robustly learning a high-dimensional Gaussian mixture model with k components from ϵ-corrupted samples up to accuracy Õ(ϵ) in total variation distance, for any constant k and under mild assumptions on the mixture. This robustness guarantee is optimal up to polylogarithmic factors. At the heart of our algorithm is a new way to relax a system of polynomial equations, which corresponds to solving an improper learning problem where we are allowed to output a Gaussian mixture model whose weights are low-degree polynomials.
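To make the setting concrete, the following is a minimal sketch of the ϵ-corruption model the abstract refers to: draw clean samples from a k-component Gaussian mixture, then let an adversary replace an ϵ-fraction of them with arbitrary points. The specific parameter values and the single fixed outlier point are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_gmm(n, weights, means, covs, rng):
    """Draw n samples from a Gaussian mixture with the given parameters."""
    comps = rng.choice(len(weights), size=n, p=weights)
    d = len(means[0])
    out = np.empty((n, d))
    for i, c in enumerate(comps):
        out[i] = rng.multivariate_normal(means[c], covs[c])
    return out

def corrupt(samples, eps, outlier, rng):
    """Replace an eps-fraction of the samples with an arbitrary point,
    mimicking the strong-contamination model (a simplifying assumption here:
    a real adversary may place each corrupted point anywhere)."""
    n = len(samples)
    idx = rng.choice(n, size=int(eps * n), replace=False)
    corrupted = samples.copy()
    corrupted[idx] = outlier
    return corrupted

# Illustrative 2-component mixture in R^2 with eps = 0.05 corruption.
weights = [0.6, 0.4]
means = [np.zeros(2), np.full(2, 5.0)]
covs = [np.eye(2), np.eye(2)]
clean = sample_gmm(1000, weights, means, covs, rng)
dirty = corrupt(clean, 0.05, outlier=np.full(2, 100.0), rng=rng)
```

A robust learner receives only `dirty` and must still recover a mixture that is Õ(ϵ)-close to the true one in total variation distance.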
