TokAlign: Efficient Vocabulary Adaptation via Token Alignment

4 June 2025
Chong Li, Jiajun Zhang, Chengqing Zong
Main: 7 pages · 14 figures · Bibliography: 6 pages · 13 tables · Appendix: 5 pages
Abstract

Tokenization is a foundational step in how Large Language Models (LLMs) process text. In new domains or languages, an inefficient tokenizer slows down both training and generation, and the resulting vocabulary mismatch hinders deep knowledge transfer between LLMs, such as token-level distillation. To mitigate this gap, we propose an efficient method named TokAlign that replaces the vocabulary of an LLM from a token co-occurrence view and further transfers token-level knowledge between models. It first aligns the source vocabulary to the target one by learning a one-to-one mapping matrix over token IDs. Model parameters, including embeddings, are then rearranged and progressively fine-tuned for the new vocabulary. Our method significantly improves multilingual text compression rates and vocabulary initialization for LLMs, decreasing the post-initialization perplexity from 3.4e2 for strong baseline methods to 1.2e2. Experimental results on models across multiple parameter scales demonstrate the effectiveness and generalization of TokAlign, which needs as few as 5k steps to restore the performance of the vanilla model. After vocabularies are unified between LLMs, token-level distillation markedly boosts the base model (+4.4% over sentence-level distillation) at a cost of only 235M tokens.
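
The abstract describes two steps: learning a one-to-one token-ID mapping from co-occurrence-based token vectors, then rearranging the embedding rows before progressive fine-tuning. The Python sketch below only illustrates that general idea; it is not the authors' implementation. It assumes equal vocabulary sizes, uses Hungarian matching as one possible way to obtain a one-to-one assignment, and all function and variable names are hypothetical.

# Illustrative sketch of the vocabulary-alignment idea (not the paper's code).
# Assumes two sets of token vectors learned from co-occurrence statistics on
# the same corpus, one per vocabulary.
import numpy as np
from scipy.optimize import linear_sum_assignment

def align_vocabularies(src_vecs: np.ndarray, tgt_vecs: np.ndarray) -> np.ndarray:
    """Return a one-to-one mapping: target token ID -> source token ID.

    src_vecs: (V, d) co-occurrence-based vectors for the source vocabulary
    tgt_vecs: (V, d) co-occurrence-based vectors for the target vocabulary
    (equal sizes assumed here so a strict one-to-one assignment exists).
    """
    # Cosine similarity between every target and source token vector.
    src = src_vecs / np.linalg.norm(src_vecs, axis=1, keepdims=True)
    tgt = tgt_vecs / np.linalg.norm(tgt_vecs, axis=1, keepdims=True)
    sim = tgt @ src.T                      # shape (V_tgt, V_src)

    # Hungarian matching maximizes total similarity under a one-to-one
    # constraint (O(V^3), fine for a toy example, not for 50k+ vocabularies).
    rows, cols = linear_sum_assignment(-sim)
    mapping = np.empty(len(rows), dtype=np.int64)
    mapping[rows] = cols                   # mapping[tgt_id] = src_id
    return mapping

def init_new_embeddings(old_emb: np.ndarray, mapping: np.ndarray) -> np.ndarray:
    """Initialize the new vocabulary's embedding table by rearranging rows
    of the old embedding matrix according to the learned token-ID mapping."""
    return old_emb[mapping].copy()

In such a setup, the rearranged rows would initialize the new embedding (and output) layers, which are then progressively fine-tuned as the abstract describes.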

@article{li2025_2506.03523,
  title={TokAlign: Efficient Vocabulary Adaptation via Token Alignment},
  author={Chong Li and Jiajun Zhang and Chengqing Zong},
  journal={arXiv preprint arXiv:2506.03523},
  year={2025}
}