
Random Walks on Context Spaces: Towards an Explanation of the Mysteries of Semantic Word Embeddings

Abstract

The papers of Mikolov et al. (2013), as well as subsequent work, have led to dramatic progress in solving word analogy tasks using semantic word embeddings. This progress leverages linear structure that is often found in the word embeddings, which is surprising since the training methods are usually nonlinear. There have been attempts, notably by Levy and Goldberg and by Pennington et al., to explain how this linear structure arises. The current paper points out the gaps in these explanations and provides a more complete explanation using a loglinear generative model for the corpus that directly models the latent semantic structure in words. The novel methodological twist is that, instead of trying to fit the best model parameters to the data, a rigorous mathematical analysis is performed using the model priors to arrive at a simple closed-form expression that approximately relates co-occurrence statistics and word embeddings. This expression closely corresponds to, and is somewhat simpler than, the existing training methods, and it leads to good solutions to analogy tasks. Empirical support is also provided for the validity of the modeling assumptions. This methodology of letting mathematical analysis substitute for some of the computational difficulty may be useful in other settings involving generative models.
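To make concrete the linear structure the abstract refers to, the sketch below shows the standard vector-arithmetic recipe for analogy tasks (e.g., "man : woman :: king : ?"), in which the answer is the word whose embedding is closest in cosine similarity to v_b - v_a + v_c. This is a minimal illustration, not code from the paper; the tiny three-dimensional vectors and the `solve_analogy` helper are hypothetical stand-ins for embeddings trained on a real corpus.

```python
import numpy as np

# Hypothetical toy embeddings; real embeddings would be learned from a corpus.
emb = {
    "man":   np.array([0.8, 0.1, 0.1]),
    "woman": np.array([0.8, 0.9, 0.1]),
    "king":  np.array([0.2, 0.1, 0.9]),
    "queen": np.array([0.2, 0.9, 0.9]),
}

def solve_analogy(a, b, c, embeddings):
    """Answer 'a : b :: c : ?' by maximizing cos(v_b - v_a + v_c, v_d),
    excluding the three query words themselves."""
    target = embeddings[b] - embeddings[a] + embeddings[c]
    target = target / np.linalg.norm(target)
    best_word, best_sim = None, -np.inf
    for word, vec in embeddings.items():
        if word in (a, b, c):
            continue
        sim = (vec @ target) / np.linalg.norm(vec)
        if sim > best_sim:
            best_word, best_sim = word, sim
    return best_word

print(solve_analogy("man", "woman", "king", emb))  # -> "queen"
```

The paper's contribution is an explanation of *why* this recipe works: under its generative model, the closed-form relation between co-occurrence statistics and embeddings implies that semantic relations appear as approximately constant vector offsets.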
