
StochasTok: Improving Fine-Grained Subword Understanding in LLMs

Main: 16 pages · 16 figures · 4 tables · Bibliography: 5 pages · Appendix: 1 page
Abstract

Subword-level understanding is integral to numerous tasks, including understanding multi-digit numbers, spelling mistakes, abbreviations, rhyming, and wordplay. Despite this, current large language models (LLMs) still often struggle with seemingly simple subword-level tasks like "How many 'r's in 'strawberry'?". A key factor behind these failures is tokenization, which obscures the fine-grained structure of words. Current alternatives, such as character-level and dropout tokenization methods, significantly increase computational costs and provide inconsistent improvements. In this paper we revisit tokenization and introduce StochasTok, a simple, efficient stochastic tokenization scheme that randomly splits tokens during training, allowing LLMs to 'see' their internal structure. Our experiments show that pretraining with StochasTok substantially improves LLMs' downstream performance across multiple subword-level language games, including character counting, substring identification, and math tasks. Furthermore, StochasTok's simplicity allows seamless integration at any stage of the training pipeline, and we demonstrate that post-training with StochasTok can instill improved subword understanding into existing pretrained models, thus avoiding costly pretraining from scratch. These dramatic improvements, achieved with a minimal change, suggest StochasTok holds exciting potential when applied to larger, more capable models. Code open-sourced at: this https URL.
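To make the core idea concrete, the following is a minimal illustrative sketch of stochastic token splitting, not the paper's exact algorithm: with some probability, a token is replaced by a pair of shorter vocabulary tokens whose concatenation yields the same text, so the training data is unchanged at the string level while the model is exposed to the token's internal structure. The `vocab`, `stochastic_split`, and `p` names here are hypothetical, and in practice the vocabulary would come from the model's real tokenizer.

```python
import random

# Toy vocabulary for illustration only; a real setup would reuse the
# model tokenizer's vocabulary (token string -> token id).
vocab = {"straw": 0, "berry": 1, "strawberry": 2, "st": 3, "raw": 4,
         "b": 5, "erry": 6, "s": 7, "trawberry": 8}

def stochastic_split(token_ids, vocab, p=0.1, rng=random):
    """Randomly replace tokens with equivalent pairs of shorter tokens.

    With probability p, a token is split at a random position such that
    both halves are themselves in the vocabulary, so the detokenized text
    is unchanged while the model 'sees' the token's internal structure.
    """
    id_to_str = {i: s for s, i in vocab.items()}
    out = []
    for tid in token_ids:
        s = id_to_str[tid]
        if len(s) > 1 and rng.random() < p:
            # Collect all split points whose halves both exist in the vocab.
            splits = [(s[:k], s[k:]) for k in range(1, len(s))
                      if s[:k] in vocab and s[k:] in vocab]
            if splits:
                left, right = rng.choice(splits)
                out.extend([vocab[left], vocab[right]])
                continue
        out.append(tid)
    return out

# Example: 'strawberry' may be re-expressed as ('straw', 'berry'),
# ('st', 'rawberry' is absent here, so it falls back to other valid pairs), etc.
print(stochastic_split([vocab["strawberry"]], vocab, p=1.0))
```

Because the transformation only re-expresses existing text with alternative token sequences, a sketch like this could in principle be applied during pretraining or post-training without modifying the underlying corpus.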

@article{sims2025_2506.01687,
  title={StochasTok: Improving Fine-Grained Subword Understanding in LLMs},
  author={Anya Sims and Thom Foster and Klara Kaleb and Tuan-Duy H. Nguyen and Joseph Lee and Jakob N. Foerster and Yee Whye Teh and Cong Lu},
  journal={arXiv preprint arXiv:2506.01687},
  year={2025}
}