RAND-WALK: A Latent Variable Model Approach to Word Embeddings

Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, Andrej Risteski
12 February 2015
arXiv: 1502.03520
Abstract

Semantic word embeddings represent the meaning of a word via a vector, and are created by diverse methods including Latent Semantic Analysis (LSA), generative text models such as topic models, matrix factorization, neural nets, and energy-based models. Many methods use nonlinear operations, such as Pointwise Mutual Information (PMI), on co-occurrence statistics, and have hand-tuned hyperparameters and reweightings. Often a generative model can help provide theoretical insight into such modeling choices, but there appears to be no such model to "explain" the above nonlinear models. For example, we know of no generative model for which the correct solution is the usual (dimension-restricted) PMI model. This paper gives a new generative model, a dynamic version of the log-linear topic model of Mnih and Hinton (2007). The methodological novelty is to use the prior to compute closed-form expressions for word statistics. These provide an explanation for nonlinear models like PMI, word2vec, and GloVe, as well as some hyperparameter choices. Experimental support is provided for the generative model assumptions, the most important of which is that latent word vectors are fairly uniformly dispersed ("isotropic") in space. The model also helps explain why low-dimensional semantic embeddings contain linear algebraic structure that allows solution of word analogies, as shown by Mikolov et al. (2013) and many subsequent papers.
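
The closed-form word statistics the abstract refers to take, roughly, the form PMI(w, w') ≈ <v_w, v_w'> / d for word vectors v_w in dimension d, up to an additive constant. The sketch below simulates the generative process described above, a slow random walk of a discourse vector c_t on the unit sphere with words emitted log-linearly, Pr[w | c_t] proportional to exp(<v_w, c_t>), and then checks that empirical PMI correlates with the rescaled inner products. All settings here (dimension, vocabulary size, corpus length, window, step size, Gumbel-max sampling) are illustrative assumptions for a toy simulation, not the paper's experimental setup; at this small scale the fit is noisy but the trend should be visible.

# Toy simulation of the RAND-WALK generative model (illustrative parameters only):
# a discourse vector c_t drifts slowly on the unit sphere, and at each step a word w
# is emitted with probability proportional to exp(<v_w, c_t>). We then check that the
# empirical PMI of co-occurring word pairs correlates with <v_w, v_w'> / d.
import numpy as np

rng = np.random.default_rng(0)

d = 10           # embedding dimension (small, for speed; an assumption, not the paper's setting)
n_words = 200    # vocabulary size
T = 200_000      # corpus length in tokens
window = 5       # co-occurrence window
step = 0.05      # random-walk step size for the discourse vector

# Latent word vectors: i.i.d. Gaussian, so the ensemble is roughly isotropic
# and norms concentrate around sqrt(d), as the model assumes.
V = rng.normal(size=(n_words, d))

# Slow geometric random walk of the discourse vector on the unit sphere.
C = np.empty((T, d))
c = rng.normal(size=d)
c /= np.linalg.norm(c)
for t in range(T):
    C[t] = c
    c = c + step * rng.normal(size=d)
    c /= np.linalg.norm(c)

# Emit one word per step from the log-linear model Pr[w | c_t] ~ exp(<v_w, c_t>),
# sampled in chunks with the Gumbel-max trick to keep memory modest.
words = np.empty(T, dtype=np.int64)
chunk = 10_000
for start in range(0, T, chunk):
    logits = C[start:start + chunk] @ V.T
    words[start:start + chunk] = np.argmax(logits + rng.gumbel(size=logits.shape), axis=1)

# Symmetric co-occurrence counts within the window.
counts = np.zeros((n_words, n_words))
for k in range(1, window + 1):
    np.add.at(counts, (words[:-k], words[k:]), 1.0)
    np.add.at(counts, (words[k:], words[:-k]), 1.0)

# Empirical PMI for sufficiently frequent pairs.
p_word = np.bincount(words, minlength=n_words) / T
p_pair = counts / counts.sum()
i, j = np.where(counts >= 20)
pmi = np.log(p_pair[i, j] / (p_word[i] * p_word[j]))

# The model predicts PMI(w, w') ~ <v_w, v_w'> / d up to an additive constant;
# check how strongly the two quantities correlate across pairs.
pred = (V[i] * V[j]).sum(axis=1) / d
r = np.corrcoef(pmi, pred)[0, 1]
print(f"pairs kept: {len(pmi)},  corr(PMI, <v,v'>/d) = {r:.2f}")

Drawing V from a spherical Gaussian is the toy counterpart of the isotropy assumption the abstract highlights; the paper's experiments test that assumption, and the PMI relationship, at scale on real corpora and fitted embeddings rather than on simulated data.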
