On Self-improving Token Embeddings

21 April 2025
Mario M. Kubek, Shiraj Pokharel, Thomas Böhme, Emma L. McDaniel, Herwig Unger, Armin R. Mikler
Abstract

This article introduces a novel and fast method for refining pre-trained static word embeddings or, more generally, token embeddings. By incorporating the embeddings of neighboring tokens in text corpora, it continuously updates the representation of each token, including tokens without pre-assigned embeddings, thereby also addressing the out-of-vocabulary problem. Operating independently of large language models and shallow neural networks, the method enables versatile applications such as corpus exploration, conceptual search, and word sense disambiguation. It is designed to enhance token representations within topically homogeneous corpora, where the vocabulary is restricted to a specific domain, yielding more meaningful embeddings than general-purpose pre-trained vectors. As an example, the methodology is applied to explore storm events and their impacts on infrastructure and communities using narratives from a subset of the NOAA Storm Events database. The article also demonstrates how the approach improves the representation of storm-related terms over time, providing insight into the evolving nature of disaster narratives.
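The abstract does not state the update rule itself, but the behavior it describes (neighbor-driven, continuous refinement of static vectors, with no neural network involved) is consistent with an iterative centroid-style update over a sliding context window. The Python sketch below is a hypothetical illustration under that assumption, not the paper's algorithm: the function name refine_embeddings, the window and lr parameters, and the random initialization of out-of-vocabulary tokens are all illustrative choices.

import numpy as np

def refine_embeddings(corpus, embeddings, dim=100, window=2,
                      lr=0.1, epochs=3, rng=None):
    # corpus     : list of tokenized sentences (lists of str)
    # embeddings : dict token -> np.ndarray of length dim; tokens
    #              missing here are treated as out-of-vocabulary
    rng = rng or np.random.default_rng(0)
    # OOV tokens receive a small random vector so they can be
    # refined like any other token (hypothetical initialization).
    for sentence in corpus:
        for tok in sentence:
            if tok not in embeddings:
                embeddings[tok] = rng.normal(scale=0.01, size=dim)
    for _ in range(epochs):
        for sentence in corpus:
            for i, tok in enumerate(sentence):
                lo = max(0, i - window)
                hi = min(len(sentence), i + window + 1)
                neighbours = [embeddings[sentence[j]]
                              for j in range(lo, hi) if j != i]
                if not neighbours:
                    continue
                centroid = np.mean(neighbours, axis=0)
                # Convex update: move a fraction lr of the way
                # toward the centroid of the in-context neighbours.
                embeddings[tok] = (1 - lr) * embeddings[tok] + lr * centroid
    return embeddings

In a scheme like this, repeated passes over a topically homogeneous corpus gradually pull related terms (for example, storm-related vocabulary) toward one another, which is one way the evolving representations described in the abstract could arise; nearest-neighbor queries over the refined vectors would then support the conceptual-search use case.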

View on arXiv: https://arxiv.org/abs/2504.14808
@article{kubek2025_2504.14808,
  title={On Self-improving Token Embeddings},
  author={Mario M. Kubek and Shiraj Pokharel and Thomas Böhme and Emma L. McDaniel and Herwig Unger and Armin R. Mikler},
  journal={arXiv preprint arXiv:2504.14808},
  year={2025}
}