Learning GMMs with Nearly Optimal Robustness Guarantees
Annual Conference on Computational Learning Theory (COLT), 2021
Abstract
In this work we solve the problem of robustly learning a high-dimensional Gaussian mixture model with $k$ components from $\epsilon$-corrupted samples up to accuracy $\widetilde{O}(\epsilon)$ in total variation distance for any constant $k$ and with mild assumptions on the mixture. This robustness guarantee is optimal up to polylogarithmic factors. At the heart of our algorithm is a new way to relax a system of polynomial equations, which corresponds to solving an improper learning problem where we are allowed to output a Gaussian mixture model whose weights are low-degree polynomials.
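As a rough illustration of the kind of relaxation the abstract alludes to (the symbols $w_i$, $\mu_i$, $\Sigma_i$, the moment maps $M_j$, the estimates $\widehat{M}_j$, and the degree bound $d$ below are our own notation, not the paper's), the proper learning problem can be phrased as a polynomial system over scalar mixing weights,
\[
\sum_{i=1}^{k} w_i \, M_j(\mu_i, \Sigma_i) = \widehat{M}_j, \qquad j = 1, \dots, D, \qquad w_i \ge 0, \quad \sum_{i=1}^{k} w_i = 1,
\]
where the $\widehat{M}_j$ are (robust) estimates of the mixture's moments. The improper relaxation described in the abstract would instead accept solutions in which the weights are low-degree polynomials $p_i$ rather than constants,
\[
\sum_{i=1}^{k} p_i \, M_j(\mu_i, \Sigma_i) = \widehat{M}_j, \qquad \deg p_i \le d,
\]
which enlarges the class of acceptable outputs. This is only a hedged sketch of the general idea, not the paper's exact formulation.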
