
Sample-Efficient Learning of Mixtures

AAAI Conference on Artificial Intelligence (AAAI), 2017
Abstract

We consider PAC learning of probability distributions (a.k.a. density estimation), where we are given an i.i.d. sample generated from an unknown target distribution, and want to output a distribution that is close to the target in total variation distance. Let $\mathcal F$ be an arbitrary class of probability distributions, and let $\mathcal{F}^k$ denote the class of $k$-mixtures of elements of $\mathcal F$. Assuming the existence of a method for learning $\mathcal F$ with sample complexity $m_{\mathcal F}(\varepsilon)$ in the realizable setting, we provide a method for learning $\mathcal F^k$ with sample complexity $O(k\log k \cdot m_{\mathcal F}(\varepsilon)/\varepsilon^{2})$ in the agnostic setting. Our mixture learning algorithm has the property that, if the $\mathcal F$-learner is proper, then the $\mathcal F^k$-learner is proper as well. We provide two applications of our main result. First, we show that the class of mixtures of $k$ axis-aligned Gaussians in $\mathbb{R}^d$ is PAC-learnable in the agnostic setting with sample complexity $\widetilde{O}(kd/\varepsilon^{4})$, which is tight in $k$ and $d$. Second, we show that the class of mixtures of $k$ Gaussians in $\mathbb{R}^d$ is PAC-learnable in the agnostic setting with sample complexity $\widetilde{O}(kd^{2}/\varepsilon^{4})$, which improves the previously known bounds of $\widetilde{O}(k^{3}d^{2}/\varepsilon^{4})$ and $\widetilde{O}(k^{4}d^{4}/\varepsilon^{2})$ in their dependence on $k$ and $d$.
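As a rough sanity check on how the two applications follow from the main theorem (a sketch of the arithmetic, not taken from the paper's proofs, and assuming the standard facts that a single axis-aligned Gaussian in $\mathbb{R}^d$ is learnable with $\widetilde{O}(d/\varepsilon^{2})$ samples and a general Gaussian with $\widetilde{O}(d^{2}/\varepsilon^{2})$ samples), one can instantiate $m_{\mathcal F}(\varepsilon)$ in the reduction:

\[
m_{\mathcal F}(\varepsilon) = \widetilde{O}\!\left(\frac{d}{\varepsilon^{2}}\right)
\quad\Longrightarrow\quad
O\!\left(\frac{k\log k \cdot m_{\mathcal F}(\varepsilon)}{\varepsilon^{2}}\right)
= \widetilde{O}\!\left(\frac{kd}{\varepsilon^{4}}\right),
\]

and likewise $m_{\mathcal F}(\varepsilon) = \widetilde{O}(d^{2}/\varepsilon^{2})$ for general Gaussians yields $\widetilde{O}(kd^{2}/\varepsilon^{4})$, matching the two stated bounds.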
