
Closed-Form Training Dynamics Reveal Learned Features and Linear Structure in Word2Vec-like Models

Abstract

Self-supervised word embedding algorithms such as word2vec provide a minimal setting for studying representation learning in language modeling. We examine the quartic Taylor approximation of the word2vec loss around the origin, and we show that both the resulting training dynamics and the final performance on downstream tasks are empirically very similar to those of word2vec. Our main contribution is to analytically solve for both the gradient flow training dynamics and the final word embeddings in terms of only the corpus statistics and training hyperparameters. The solutions reveal that these models learn orthogonal linear subspaces one at a time, each one incrementing the effective rank of the embeddings until model capacity is saturated. Training on Wikipedia, we find that each of the top linear subspaces represents an interpretable topic-level concept. Finally, we apply our theory to describe how linear representations of more abstract semantic concepts emerge during training; these can be used to complete analogies via vector addition.
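The abstract's starting point is a quartic Taylor approximation of the word2vec loss around the origin. As a minimal illustration of that idea (not the paper's actual model), the sketch below expands the log-sigmoid term that appears in skip-gram-with-negative-sampling losses to fourth order and compares it to the exact function near zero, where embeddings sit at small random initialization; the function names and comparison grid are illustrative choices, not taken from the paper.

```python
import numpy as np

def log_sigmoid(x):
    # Numerically stable log sigma(x) = -log(1 + exp(-x)).
    return -np.logaddexp(0.0, -x)

def log_sigmoid_quartic(x):
    # Quartic Taylor expansion of log sigma(x) about x = 0:
    # log sigma(x) ~ -log 2 + x/2 - x^2/8 + x^4/192 (the cubic term vanishes).
    return -np.log(2.0) + x / 2 - x**2 / 8 + x**4 / 192

# Near the origin (small word-context inner products, as at small random
# initialization), the quartic expansion tracks the exact log-sigmoid closely.
x = np.linspace(-1.0, 1.0, 9)
for xi, exact, approx in zip(x, log_sigmoid(x), log_sigmoid_quartic(x)):
    print(f"x={xi:+.2f}  exact={exact:+.5f}  quartic={approx:+.5f}")
```

Because embeddings are initialized near the origin, a polynomial surrogate of this kind can remain close to the true loss throughout much of training, which is what makes the closed-form analysis of the dynamics tractable.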

@article{karkada2025_2502.09863,
  title={Closed-Form Training Dynamics Reveal Learned Features and Linear Structure in Word2Vec-like Models},
  author={Dhruva Karkada and James B. Simon and Yasaman Bahri and Michael R. DeWeese},
  journal={arXiv preprint arXiv:2502.09863},
  year={2025}
}