ResearchTrend.AI

arXiv:1609.00368
Ten Steps of EM Suffice for Mixtures of Two Gaussians

1 September 2016
C. Daskalakis, Christos Tzamos, Manolis Zampetakis
Abstract

The Expectation-Maximization (EM) algorithm is a widely used method for maximum likelihood estimation in models with latent variables. For estimating mixtures of Gaussians, its iteration can be viewed as a soft version of the k-means clustering algorithm. Despite its wide use and applications, there are essentially no known convergence guarantees for this method. We provide global convergence guarantees for mixtures of two Gaussians with known covariance matrices. We show that the population version of EM, where the algorithm is given access to infinitely many samples from the mixture, converges geometrically to the correct mean vectors, and provide simple, closed-form expressions for the convergence rate. As a simple illustration, we show that, in one dimension, ten steps of the EM algorithm initialized at infinity result in an estimate of the means with less than 1% error. In the finite-sample regime, we show that, under a random initialization, $\tilde{O}(d/\epsilon^2)$ samples suffice to compute the unknown vectors to within $\epsilon$ in Mahalanobis distance, where $d$ is the dimension. In particular, the error rate of the EM-based estimator is $\tilde{O}\left(\sqrt{d/n}\right)$, where $n$ is the number of samples, which is optimal up to logarithmic factors.
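To make the iteration concrete, here is a minimal sample-based EM sketch (not the paper's code) for a one-dimensional, equal-weight mixture of two Gaussians with known unit variances, estimating only the two means. The true means of ±2, the sample size, and the far-out initialization of ±10 (echoing the paper's "initialized at infinity" illustration) are illustrative choices.

```python
import math
import random

def em_two_gaussians_1d(xs, mu1, mu2, steps=10):
    """EM for an equal-weight mixture of N(mu1, 1) and N(mu2, 1) in 1D,
    updating only the two means (the variances are assumed known)."""
    for _ in range(steps):
        # E-step: posterior probability that each sample came from component 1.
        # w(x) = phi(x - mu1) / (phi(x - mu1) + phi(x - mu2))
        #      = 1 / (1 + exp(((x - mu1)^2 - (x - mu2)^2) / 2))
        w = [1.0 / (1.0 + math.exp(((x - mu1) ** 2 - (x - mu2) ** 2) / 2.0))
             for x in xs]
        # M-step: responsibility-weighted sample means.
        s1 = sum(w)
        mu1 = sum(wi * x for wi, x in zip(w, xs)) / s1
        mu2 = sum((1.0 - wi) * x for wi, x in zip(w, xs)) / (len(xs) - s1)
    return mu1, mu2

random.seed(0)
# Illustrative ground truth: means -2 and +2, unit variance, equal weights.
xs = [random.gauss(random.choice((-2.0, 2.0)), 1.0) for _ in range(5000)]
# Start far from the truth; ten steps already land close to the true means.
m1, m2 = em_two_gaussians_1d(xs, -10.0, 10.0, steps=10)
```

With a few thousand samples, the ten-step estimates `m1` and `m2` land within a few percent of the true means, matching the flavor of the paper's one-dimensional illustration.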

View on arXiv