Near-optimal Sample Complexity Bounds for Robust Learning of Gaussian Mixtures via Compression Schemes

Abstract

We prove that $\tilde{\Theta}(kd^2/\varepsilon^2)$ samples are necessary and sufficient for learning a mixture of $k$ Gaussians in $\mathbb{R}^d$, up to error $\varepsilon$ in total variation distance. This improves both the known upper bounds and lower bounds for this problem. For mixtures of axis-aligned Gaussians, we show that $\tilde{O}(kd/\varepsilon^2)$ samples suffice, matching a known lower bound. Moreover, these results hold in the agnostic-learning/robust-estimation setting as well, where the target distribution is only approximately a mixture of Gaussians. The upper bound is shown using a novel technique for distribution learning based on a notion of `compression.' Any class of distributions that allows such a compression scheme can also be learned with few samples. Moreover, if a class of distributions has such a compression scheme, then so do the classes of products and mixtures of those distributions. The core of our main result is showing that the class of Gaussians in $\mathbb{R}^d$ admits a small-sized compression scheme.
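For readers unfamiliar with the compression framework, the following is a rough, paraphrased sketch of what a compression scheme for a class of distributions typically means here; the symbols $\tau$, $t$, $m$ and the decoder $\mathrm{Dec}$ are illustrative names not taken from the abstract, and the precise definition and constants appear in the paper itself.

\[
\Pr_{S \sim q^{m}}\!\Big[\ \exists\, L \subseteq S,\ |L| \le \tau,\ \exists\, b \in \{0,1\}^{t}\ :\ d_{\mathrm{TV}}\big(\mathrm{Dec}(L,\,b),\ q\big) \le \varepsilon\ \Big] \;\ge\; \tfrac{2}{3},
\qquad \text{for every } q \text{ in the class.}
\]

Informally, a class admitting such a scheme can be learned from roughly $\tilde{O}\!\big(m + (\tau + t)/\varepsilon^{2}\big)$ samples (up to logarithmic factors), and the abstract's closing claim is that Gaussians in $\mathbb{R}^d$ admit such a scheme with small $\tau$, $t$, and $m$, which then extends to products and mixtures.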
