
Learning Mixtures of Experts with EM: A Mirror Descent Perspective

Abstract

Classical Mixtures of Experts (MoE) are machine learning models that partition the input space and train a separate "expert" model on each partition. Recently, MoE-based model architectures have become popular as a means to reduce training and inference costs. There, the partitioning function and the experts are learnt jointly via gradient descent-type methods on the log-likelihood. In this paper we study theoretical guarantees of the Expectation Maximization (EM) algorithm for the training of MoE models. We first rigorously analyze EM for MoE where the conditional distribution of the target and latent variables given the feature variable belongs to an exponential family of distributions, and show its equivalence to projected Mirror Descent with unit step size and a Kullback-Leibler Divergence regularizer. This perspective allows us to derive new convergence results and identify conditions for local linear convergence; in the special case of a mixture of 2 linear or logistic experts, we additionally provide guarantees for linear convergence based on the signal-to-noise ratio. Experiments on synthetic and (small-scale) real-world data support that EM outperforms the gradient descent algorithm in terms of both convergence rate and achieved accuracy.
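
To make the setting concrete, below is a minimal sketch (not the authors' code) of EM for a mixture of 2 linear experts with a softmax gating network and Gaussian experts with a known noise level. The function name em_mixture_of_linear_experts, the step size gate_lr, the number of gating gradient steps, and the synthetic data are illustrative assumptions, not taken from the paper; the E-step computes posterior responsibilities, and the M-step fits each expert by weighted least squares and updates the gate on the weighted logistic loss.

import numpy as np

def em_mixture_of_linear_experts(X, y, sigma=0.5, n_iters=50, gate_lr=0.1, gate_steps=20):
    """EM for y | x, z=k ~ N(w_k^T x, sigma^2), with gate p(z=1|x) = sigmoid(v^T x)."""
    n, d = X.shape
    rng = np.random.default_rng(0)
    W = rng.normal(size=(2, d))          # expert weights (one row per expert)
    v = np.zeros(d)                      # gating weights (2 experts -> single logit)

    for _ in range(n_iters):
        # E-step: posterior responsibility of each expert for each sample.
        gate1 = 1.0 / (1.0 + np.exp(-X @ v))               # p(z=1 | x)
        lik = np.stack([
            np.exp(-0.5 * ((y - X @ W[0]) / sigma) ** 2),  # expert 0 likelihood
            np.exp(-0.5 * ((y - X @ W[1]) / sigma) ** 2),  # expert 1 likelihood
        ], axis=1)
        prior = np.stack([1.0 - gate1, gate1], axis=1)
        post = prior * lik
        post /= post.sum(axis=1, keepdims=True) + 1e-12    # responsibilities r_{ik}

        # M-step (experts): weighted least squares for each expert.
        for k in range(2):
            R = post[:, k]
            A = X.T @ (R[:, None] * X) + 1e-6 * np.eye(d)
            W[k] = np.linalg.solve(A, X.T @ (R * y))

        # M-step (gate): a few gradient steps on the weighted logistic loss,
        # fitting p(z=1|x) to the soft labels r_{i1}.
        for _ in range(gate_steps):
            gate1 = 1.0 / (1.0 + np.exp(-X @ v))
            v += gate_lr * X.T @ (post[:, 1] - gate1) / n

    return W, v

# Toy usage on synthetic data generated by two ground-truth linear experts.
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 3))
z = (X @ np.array([1.0, -1.0, 0.5]) > 0).astype(int)
W_true = np.array([[2.0, 0.0, 1.0], [-1.0, 1.5, 0.0]])
y = np.einsum('ij,ij->i', X, W_true[z]) + 0.5 * rng.normal(size=2000)
W_hat, v_hat = em_mixture_of_linear_experts(X, y)
print(W_hat)

In the paper's mirror descent view, the E-step/M-step pair above corresponds to one projected mirror descent step with unit step size under a KL regularizer on the model's likelihood objective; the sketch only illustrates the iteration being analyzed.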

@article{fruytier2025_2411.06056,
  title={Learning Mixtures of Experts with EM: A Mirror Descent Perspective},
  author={Quentin Fruytier and Aryan Mokhtari and Sujay Sanghavi},
  journal={arXiv preprint arXiv:2411.06056},
  year={2025}
}