Fast Incremental Expectation Maximization for finite-sum optimization: nonasymptotic convergence

29 December 2020
G. Fort, Pierre Gach, Eric Moulines
Abstract

Fast Incremental Expectation Maximization (FIEM) is a version of the EM framework for large datasets. In this paper, we first recast FIEM and other incremental EM-type algorithms in the {\em Stochastic Approximation within EM} framework. Then, we provide nonasymptotic bounds for the convergence in expectation as a function of the number of examples $n$ and of the maximal number of iterations $k_{\max}$. We propose two strategies for achieving an $\epsilon$-approximate stationary point, with $k_{\max} = O(n^{2/3}/\epsilon)$ and $k_{\max} = O(\sqrt{n}/\epsilon^{3/2})$ respectively, both relying on a random termination rule before $k_{\max}$ and on a constant step size in the Stochastic Approximation step. Our bounds improve on the literature in two ways. First, they allow $k_{\max}$ to scale as $\sqrt{n}$, which improves on $n^{2/3}$, the best rate obtained so far; this comes at the cost of a stronger dependence on the tolerance $\epsilon$, making this control relevant for small-to-medium accuracy relative to the number of examples $n$. Second, for the $n^{2/3}$ rate, numerical illustrations show that, thanks to an optimized choice of the step size and to bounds expressed in terms of quantities characterizing the optimization problem at hand, our results yield a less conservative choice of the step size and a better control of the convergence in expectation.
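
The abstract describes FIEM as a Stochastic Approximation within EM scheme: a variance-reduced estimate of the mean E-step statistic, a constant step size, and a random termination rule before $k_{\max}$. As a rough illustration of that structure only, here is a minimal sketch on a toy two-component Gaussian mixture; the model, step size, and helper names (`per_sample_stat`, `m_step`) are illustrative assumptions, not the paper's exact algorithm or constants.

```python
# A minimal sketch of the recursion described in the abstract: Stochastic
# Approximation within EM with a SAGA-style variance-reduced E-step
# statistic, a constant step size, and a random termination rule.
# The toy model and all names are illustrative assumptions, not the
# paper's exact algorithm, constants, or notation.
import numpy as np

rng = np.random.default_rng(0)

# Toy model: two-component Gaussian mixture with unit variances and
# equal weights; the unknown parameter is theta = (mu1, mu2).
n = 500
x = np.concatenate([rng.normal(-2.0, 1.0, n // 2),
                    rng.normal(2.0, 1.0, n // 2)])

def per_sample_stat(theta, xi):
    # E-step statistic for one example: responsibility of component 1
    # and the responsibility-weighted sums needed by the M-step.
    mu1, mu2 = theta
    w1 = np.exp(-0.5 * (xi - mu1) ** 2)
    w2 = np.exp(-0.5 * (xi - mu2) ** 2)
    r = w1 / (w1 + w2)
    return np.array([r, r * xi, 1.0 - r, (1.0 - r) * xi])

def m_step(s):
    # M-step map T(s): closed-form update of the two means.
    return np.array([s[1] / s[0], s[3] / s[2]])

kmax = 4000              # maximal number of iterations (k_max)
gamma = 0.01             # constant step size in the SA step
K = rng.integers(kmax)   # random termination rule: stop at a uniform index

theta = np.array([-1.0, 1.0])
memory = np.stack([per_sample_stat(theta, xi) for xi in x])  # per-sample table
mem_mean = memory.mean(axis=0)
S_hat = mem_mean.copy()

for _ in range(K):
    i, j = rng.integers(n), rng.integers(n)   # two independent indices
    # Refresh the memory entry of sample i at the current parameter.
    new_i = per_sample_stat(theta, x[i])
    mem_mean += (new_i - memory[i]) / n
    memory[i] = new_i
    # Variance-reduced estimate of the full E-step statistic via sample j.
    S = mem_mean + per_sample_stat(theta, x[j]) - memory[j]
    # Stochastic Approximation step on the statistic, then the M-step.
    S_hat += gamma * (S - S_hat)
    theta = m_step(S_hat)

print("estimated means:", np.round(theta, 2))
```

Stopping at a uniformly drawn index, rather than at the last iterate, mirrors the random termination rule under which the abstract's $\epsilon$-approximate stationarity bounds hold in expectation.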
