Robustly Learning a Gaussian: Getting Optimal Error, Efficiently

Abstract

We study the fundamental problem of learning the parameters of a high-dimensional Gaussian in the presence of noise -- where an ε-fraction of our samples were chosen by an adversary. We give robust estimators that achieve estimation error O(ε) in the total variation distance, which is optimal up to a universal constant that is independent of the dimension. In the case where just the mean is unknown, our robustness guarantee is optimal up to a factor of √2 and the running time is polynomial in d and 1/ε. When both the mean and covariance are unknown, the running time is polynomial in d and quasipolynomial in 1/ε. Moreover, all of our algorithms require only a polynomial number of samples. Our work shows that the same sorts of error guarantees that were established over fifty years ago in the one-dimensional setting can also be achieved by efficient algorithms in high-dimensional settings.
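To make the ε-contamination model concrete, here is a minimal one-dimensional sketch (not the paper's algorithm): a (1 − ε)-fraction of samples are drawn from a Gaussian and the rest are placed adversarially. The empirical mean can be dragged arbitrarily far by the outliers, while a classical robust estimator like the median incurs error on the order of ε, illustrating the kind of guarantee the paper extends to high dimensions with efficient algorithms.

```python
import random
import statistics

# eps-contamination model: (1 - eps)-fraction of samples from N(mu, 1),
# the remaining eps-fraction chosen by an adversary (here, planted at 1000).
random.seed(0)
mu, eps, n = 0.0, 0.1, 10_000
samples = [random.gauss(mu, 1.0) for _ in range(int((1 - eps) * n))]
samples += [1000.0] * (n - len(samples))  # adversarial outliers

# The sample mean is shifted by roughly eps * 1000 = 100 ...
mean_err = abs(statistics.fmean(samples) - mu)
# ... while the median's error remains O(eps), independent of the outliers.
median_err = abs(statistics.median(samples) - mu)
print(mean_err, median_err)
```

In one dimension the median already achieves an O(ε) guarantee; the difficulty the paper addresses is obtaining dimension-independent O(ε) error with computationally efficient estimators in high dimensions.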
