Self-Vocabularizing Training for Neural Machine Translation

18 March 2025
Pin-Jie Lin
Ernie Chang
Yangyang Shi
Vikas Chandra
Abstract

Past vocabulary learning techniques identify relevant vocabulary before training, relying on statistical and entropy-based assumptions that largely neglect the role of model training. Empirically, we observe that trained translation models are induced to use a byte-pair encoding (BPE) vocabulary subset distinct from the original BPE vocabulary, leading to performance improvements when retrained with the induced vocabulary. In this paper, we analyze this discrepancy in neural machine translation by examining vocabulary and entropy shifts during self-training, where each iteration generates a labeled dataset by pairing source sentences with the model's predictions to define a new vocabulary. Building on these insights, we propose self-vocabularizing training, an iterative method that self-selects a smaller, more effective vocabulary, yielding up to a 1.49 BLEU improvement. Moreover, we find that deeper model architectures lead to both an increase in unique token usage and a 6-8% reduction in vocabulary size.
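The abstract describes an iterative loop: train a model, translate the source side with it, re-learn a (smaller) vocabulary from the resulting self-labeled data, and retrain with that induced vocabulary. The sketch below illustrates only that loop structure as stated in the abstract; the helper callables (train_fn, translate_fn, learn_vocab_fn) and the iteration count are hypothetical placeholders, not the authors' implementation or any specific toolkit's API.

# Minimal sketch of the self-vocabularizing training loop described in the
# abstract. The three callables are assumed placeholders, not code from the paper.

def self_vocabularizing_training(
    src_sentences,    # source-side sentences of the parallel corpus
    tgt_sentences,    # target-side (reference) sentences of the parallel corpus
    train_fn,         # (src, tgt, vocab) -> trained translation model
    translate_fn,     # (model, src) -> model predictions for the source sentences
    learn_vocab_fn,   # (src, tgt) -> BPE vocabulary learned from the given pairs
    iterations=3,     # number of self-training rounds (illustrative choice)
):
    # Learn the initial vocabulary from the original parallel data and train.
    vocab = learn_vocab_fn(src_sentences, tgt_sentences)
    model = train_fn(src_sentences, tgt_sentences, vocab)

    for _ in range(iterations):
        # Self-labeling: pair each source sentence with the model's prediction.
        predictions = translate_fn(model, src_sentences)

        # Re-learn the vocabulary from the self-labeled data; per the abstract,
        # the model tends to use a smaller subset of the original BPE vocabulary.
        vocab = learn_vocab_fn(src_sentences, predictions)

        # Retrain with the induced vocabulary.
        model = train_fn(src_sentences, tgt_sentences, vocab)

    return model, vocab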

@article{lin2025_2503.13837,
  title={Self-Vocabularizing Training for Neural Machine Translation},
  author={Pin-Jie Lin and Ernie Chang and Yangyang Shi and Vikas Chandra},
  journal={arXiv preprint arXiv:2503.13837},
  year={2025}
}