Date Fragments: A Hidden Bottleneck of Tokenization for Temporal Reasoning

Modern BPE tokenizers often split calendar dates into meaningless fragments, e.g., 20250312 → 202, 503, 12, inflating token counts and obscuring the inherent structure needed for robust temporal reasoning. In this work, we (1) introduce a simple yet interpretable metric, termed date fragmentation ratio, that measures how faithfully a tokenizer preserves multi-digit date components; (2) release DateAugBench, a suite of 6,500 examples spanning three temporal reasoning tasks: context-based date resolution, format-invariance puzzles, and date arithmetic across historical, contemporary, and future time periods; and (3) through layer-wise probing and causal attention-hop analyses, uncover an emergent date-abstraction mechanism whereby large language models stitch together the fragments of month, day, and year components for temporal reasoning. Our experiments show that excessive fragmentation correlates with accuracy drops of up to 10 points on uncommon dates, such as historical and futuristic ones. Further, we find that the larger the model, the more quickly the emergent date abstraction that heals date fragments is accomplished. Lastly, we observe a reasoning path that LLMs follow to assemble date fragments, which typically differs from human interpretation (year → month → day). Our datasets and code are made publicly available \href{this https URL}{here}.
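To make the metric concrete, the sketch below shows one plausible way to compute a date fragmentation ratio; the paper's exact definition may differ. The assumption here is that the ratio is the fraction of date components (year, month, day) that the tokenizer fails to emit as single, intact tokens, so 0.0 means perfectly preserved and 1.0 means fully fragmented. The function name and signature are illustrative, not the authors' implementation.

from typing import Callable, List

def date_fragmentation_ratio(date_str: str,
                             components: List[str],
                             tokenize: Callable[[str], List[str]]) -> float:
    """Fraction of date components not preserved as a single whole token (hypothetical definition)."""
    # Strip common BPE/SentencePiece word-boundary markers before comparing.
    tokens = [t.lstrip("Ġ▁") for t in tokenize(date_str)]
    broken = sum(1 for comp in components if comp not in tokens)
    return broken / len(components)

# Illustrative usage with any subword tokenizer, e.g. a Hugging Face one:
#   from transformers import AutoTokenizer
#   tok = AutoTokenizer.from_pretrained("gpt2")
#   ratio = date_fragmentation_ratio("20250312", ["2025", "03", "12"], tok.tokenize)
# If the tokenizer splits "20250312" into ["202", "503", "12"], only "12" survives
# intact, giving a ratio of 2/3 under this assumed definition.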
@article{bhatia2025_2505.16088,
  title={Date Fragments: A Hidden Bottleneck of Tokenization for Temporal Reasoning},
  author={Gagan Bhatia and Maxime Peyrard and Wei Zhao},
  journal={arXiv preprint arXiv:2505.16088},
  year={2025}
}