Gradient-based training of Gaussian Mixture Models in High-Dimensional Spaces

Abstract

We present an approach for efficiently training GMMs solely with Stochastic Gradient Descent (SGD) on huge amounts of non-stationary, high-dimensional data. In such scenarios, SGD is superior to the traditional Expectation-Maximization (EM) algorithm with respect to execution time and memory usage, and additionally admits the use of small batch sizes. To use SGD in high-dimensional spaces, we propose to maximize a lower bound of a GMM's log-likelihood, which we prove to be feasible, justify experimentally, and show to be numerically stable. Since SGD seems more prone than EM to get stuck in local optima during early training phases, we introduce an annealing procedure that initially penalizes a large class of degenerate solutions before transitioning into a "normal" training regime. Experiments on several image datasets show that our approach is realizable, efficient and achieves log-likelihood values comparable to those of EM in a variety of scenarios. A TensorFlow implementation is provided to allow for reproduction.
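To illustrate the general idea, here is a minimal sketch of SGD-based GMM training, assuming the lower bound is the max-component approximation (the sum over components inside the log replaced by its largest term, which lower-bounds the true log-likelihood) and a diagonal-covariance parameterization; it is not the authors' implementation, omits the annealing procedure, and all names and hyperparameters are illustrative assumptions.

```python
import numpy as np
import tensorflow as tf

K, D = 8, 64  # number of mixture components, data dimensionality (assumed values)

# Trainable GMM parameters: mixing logits, component means, diagonal log-variances.
logits   = tf.Variable(tf.zeros([K]))
means    = tf.Variable(tf.random.normal([K, D], stddev=0.1))
log_vars = tf.Variable(tf.zeros([K, D]))

def max_component_lower_bound(x):
    """Lower bound of the GMM log-likelihood for a batch x of shape [N, D]."""
    log_pi = tf.nn.log_softmax(logits)                    # [K], normalized mixing weights
    var    = tf.exp(log_vars)                             # [K, D]
    diff   = x[:, None, :] - means[None, :, :]            # [N, K, D]
    # Per-component diagonal Gaussian log-densities, shape [N, K].
    log_p = -0.5 * tf.reduce_sum(
        diff ** 2 / var + log_vars + np.log(2.0 * np.pi), axis=-1)
    # Max over components instead of log-sum-exp: a lower bound on log p(x).
    return tf.reduce_mean(tf.reduce_max(log_pi + log_p, axis=-1))

opt = tf.keras.optimizers.SGD(learning_rate=1e-3)

@tf.function
def train_step(x):
    with tf.GradientTape() as tape:
        loss = -max_component_lower_bound(x)   # maximize the bound = minimize its negative
    params = [logits, means, log_vars]
    opt.apply_gradients(zip(tape.gradient(loss, params), params))
    return loss

# Example call on a random mini-batch (placeholder data):
loss = train_step(tf.random.normal([32, D]))
```

Because the max over components is used instead of the log-sum-exp, the forward pass avoids the overflow/underflow issues that make naive GMM log-likelihoods numerically fragile in high dimensions, which is one motivation the abstract gives for optimizing a lower bound.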
