Revealing Language Model Trajectories via Kullback-Leibler Divergence

21 May 2025
Ryo Kishino, Yusuke Takase, Momose Oyama, Hiroaki Yamagiwa, Hidetoshi Shimodaira
Main: 4 pages · Appendix: 9 pages · Bibliography: 7 pages · 24 figures · 9 tables
Abstract

A recently proposed method enables efficient estimation of the KL divergence between language models, including models with different architectures, by assigning coordinates based on log-likelihood vectors. To better understand the behavior of this metric, we systematically evaluate KL divergence across a wide range of conditions using publicly available language models. Our analysis covers comparisons between pretraining checkpoints, fine-tuned and base models, and layers via the logit lens. We find that trajectories of language models, as measured by KL divergence, exhibit a spiral structure during pretraining and thread-like progressions across layers. Furthermore, we show that, in terms of diffusion exponents, model trajectories in the log-likelihood space are more constrained than those in weight space.
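
The abstract concerns KL divergence between language models. As a point of reference, the sketch below shows a direct Monte-Carlo estimate of per-token KL divergence between two causal language models on shared text. This is not the paper's coordinate-based estimator (which embeds models via log-likelihood vectors so that even models with different architectures or vocabularies can be compared); it is only a baseline for the quantity being studied. The model names (gpt2, distilgpt2) and the sample text are illustrative assumptions.

# Minimal sketch: Monte-Carlo estimate of token-level KL divergence between two
# causal language models on shared text. Not the paper's coordinate-based
# estimator; just a direct baseline for the quantity it approximates.
# Model names and the sample text are illustrative placeholders.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

def next_token_log_probs(model_name: str, text: str) -> torch.Tensor:
    """Return per-position next-token log-probabilities, shape (T-1, vocab)."""
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.eval()
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits  # (1, T, vocab)
    return F.log_softmax(logits[0, :-1], dim=-1)

text = "Language models trace trajectories in distribution space during training."
# Caveat: a per-token KL only makes sense when both models share a tokenizer and
# vocabulary; the log-likelihood-vector approach is what lifts this restriction.
log_p = next_token_log_probs("gpt2", text)        # model P (assumed)
log_q = next_token_log_probs("distilgpt2", text)  # model Q (assumed, same vocab)

# KL(P || Q) per position: sum_v p(v) * (log p(v) - log q(v)), then average.
kl_per_pos = (log_p.exp() * (log_p - log_q)).sum(dim=-1)
print(f"Mean per-token KL(P||Q): {kl_per_pos.mean().item():.4f} nats")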

@article{kishino2025_2505.15353,
  title={Revealing Language Model Trajectories via Kullback-Leibler Divergence},
  author={Ryo Kishino and Yusuke Takase and Momose Oyama and Hiroaki Yamagiwa and Hidetoshi Shimodaira},
  journal={arXiv preprint arXiv:2505.15353},
  year={2025}
}