Tokenization Constraints in LLMs: A Study of Symbolic and Arithmetic Reasoning Limits

20 May 2025
Xiang Zhang
Juntai Cao
Jiaqi Wei
Yiwei Xu
Chenyu You
Topic: LRM
Abstract

Tokenization is the first, and often underappreciated, layer of computation in language models. While Chain-of-Thought (CoT) prompting enables transformer models to approximate recurrent computation by externalizing intermediate steps, we show that the success of such reasoning is fundamentally bounded by the structure of tokenized inputs. This work presents a theoretical and empirical investigation into how tokenization schemes, particularly subword-based methods like byte-pair encoding (BPE), impede symbolic computation by merging or obscuring atomic reasoning units. We introduce the notion of Token Awareness to formalize how poor token granularity disrupts logical alignment and prevents models from generalizing symbolic procedures. Through systematic evaluation on arithmetic and symbolic tasks, we demonstrate that token structure dramatically affects reasoning performance, causing failure even with CoT, while atomically aligned formats unlock strong generalization, allowing small models (e.g., GPT-4o-mini) to outperform larger systems (e.g., o1) in structured reasoning. Our findings reveal that symbolic reasoning ability in LLMs is not purely architectural, but deeply conditioned on token-level representations.
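The abstract's core contrast, BPE merging digits into multi-digit tokens versus delimiter-separated digits forming atomically aligned tokens, is easy to observe directly. Below is a minimal sketch using the `tiktoken` package (an assumption: the paper does not specify this tool), inspecting the `o200k_base` BPE vocabulary used by the GPT-4o model family.

```python
# Minimal sketch (assumes `pip install tiktoken`; tiktoken is not mentioned
# in the paper itself) showing how BPE granularity differs between a plain
# number and a delimiter-separated, "atomically aligned" rendering of it.
import tiktoken

# BPE vocabulary used by GPT-4o-family models.
enc = tiktoken.get_encoding("o200k_base")

def show_tokens(text: str) -> None:
    """Print the BPE token pieces that `text` is split into."""
    ids = enc.encode(text)
    pieces = [
        enc.decode_single_token_bytes(i).decode("utf-8", errors="replace")
        for i in ids
    ]
    print(f"{text!r:24} -> {pieces}")

# Plain number: BPE merges runs of digits into multi-digit tokens,
# obscuring the atomic units that digit-level arithmetic operates on.
show_tokens("987654321")

# Space-separated digits: each digit lands in its own token, the kind of
# atomically aligned format the abstract associates with stronger
# symbolic generalization.
show_tokens("9 8 7 6 5 4 3 2 1")
```

On current `o200k_base` vocabularies the plain number comes back in multi-digit chunks while the spaced version yields one token per digit; the exact chunking depends on the tokenizer version, so treat the output as illustrative rather than definitive.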

View on arXiv
@article{zhang2025_2505.14178,
  title={Tokenization Constraints in LLMs: A Study of Symbolic and Arithmetic Reasoning Limits},
  author={Xiang Zhang and Juntai Cao and Jiaqi Wei and Yiwei Xu and Chenyu You},
  journal={arXiv preprint arXiv:2505.14178},
  year={2025}
}