Semantic word embeddings represent the meaning of a word via a vector, and are created by diverse methods such as Latent Semantic Analysis (LSA), generative text models like topic models, matrix factorization, neural nets, and energy-based models. Many methods apply nonlinear operations, such as Pointwise Mutual Information (PMI), to co-occurrence statistics, and rely on hand-tuned hyperparameters and reweighting schemes. Often a {\em generative model} can provide theoretical insight into such modeling choices, but there appears to be no such model that ``explains'' the above nonlinear models. For example, we know of no generative model for which the correct solution is the usual (dimension-restricted) PMI model. This paper gives a new generative model, a dynamic version of the loglinear topic model of \citet{mnih2007three}. The methodological novelty is to use the prior to compute {\em closed-form} expressions for word statistics. These provide an explanation for nonlinear models like PMI, {\bf word2vec}, and GloVe, as well as for some hyperparameter choices. Experimental support is provided for the generative model's assumptions, the most important of which is that latent word vectors are fairly uniformly dispersed (``isotropic'') in space. The model also helps explain why low-dimensional semantic embeddings contain linear algebraic structure that allows solution of word analogies, as shown by~\citet{mikolov2013efficient} and many subsequent papers.
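The abstract refers to the dimension-restricted PMI model and to solving analogies via linear structure in the embeddings. The sketch below is only an illustration of that pipeline, not the paper's model or code: it builds a PMI matrix from an invented toy co-occurrence matrix, obtains low-dimensional word vectors by truncated SVD, and answers an analogy query by vector arithmetic. The vocabulary, counts, and dimension are all hypothetical.

```python
# Illustrative sketch (hypothetical data): PMI matrix -> low-dimensional
# embedding via truncated SVD -> analogy solving by vector arithmetic.
import numpy as np

# Toy vocabulary and a symmetric co-occurrence count matrix C (made-up counts).
vocab = ["king", "queen", "man", "woman", "crown"]
C = np.array([
    [0, 8, 6, 2, 7],
    [8, 0, 2, 6, 7],
    [6, 2, 0, 5, 1],
    [2, 6, 5, 0, 1],
    [7, 7, 1, 1, 0],
], dtype=float)

# PMI(w, w') = log[ p(w, w') / (p(w) p(w')) ], estimated from the counts.
total = C.sum()
p_joint = C / total
p_word = C.sum(axis=1) / total
with np.errstate(divide="ignore"):
    pmi = np.log(p_joint / np.outer(p_word, p_word))
pmi[np.isneginf(pmi)] = 0.0  # zero out -inf entries from zero counts (PPMI-style truncation)

# Dimension-restricted factorization: truncated SVD gives d-dimensional word vectors.
d = 3
U, S, _ = np.linalg.svd(pmi)
vectors = U[:, :d] * np.sqrt(S[:d])  # one row per word

def analogy(a, b, c):
    """Return the word closest (by cosine) to vec(b) - vec(a) + vec(c), excluding a, b, c."""
    idx = {w: i for i, w in enumerate(vocab)}
    target = vectors[idx[b]] - vectors[idx[a]] + vectors[idx[c]]
    sims = vectors @ target / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(target) + 1e-9)
    for w in (a, b, c):
        sims[idx[w]] = -np.inf  # exclude the query words themselves
    return vocab[int(np.argmax(sims))]

# With real corpus statistics this kind of query tends to return "queen";
# on the toy counts above the answer is not guaranteed.
print(analogy("man", "king", "woman"))
```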