Boundless Byte Pair Encoding: Breaking the Pre-tokenization Barrier

31 March 2025
Craig W. Schmidt
Varshini Reddy
Chris Tanner
Yuval Pinter
Abstract

Pre-tokenization, the initial step in many modern tokenization pipelines, segments text into smaller units called pretokens, typically splitting on whitespace and punctuation. While this process encourages having full, individual words as tokens, it introduces a fundamental limitation in most tokenization algorithms such as Byte Pair Encoding (BPE). Specifically, pre-tokenization causes the distribution of tokens in a corpus to heavily skew towards common, full-length words. This skewed distribution limits the benefits of expanding to larger vocabularies, since the additional tokens appear with progressively lower counts. To overcome this barrier, we propose BoundlessBPE, a modified BPE algorithm that relaxes the pretoken boundary constraint. Our approach selectively merges two complete pretokens into a larger unit we term a superword. Superwords are not necessarily semantically cohesive. For example, the pretokens " of" and " the" might be combined to form the superword " of the". This merging strategy results in a substantially more uniform distribution of tokens across a corpus than standard BPE, and compresses text more effectively, with an approximate 20% increase in bytes per token.
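The mechanics are easy to illustrate. The sketch below is a minimal Python toy, not the authors' implementation: names such as count_pairs and apply_merge, the nested-list corpus layout, and the purely greedy most-frequent-pair rule are our own simplifications (the paper merges pretokens "selectively", by criteria the abstract does not spell out). It shows the one structural change: besides ordinary within-pretoken merges, two adjacent pretokens that are each already a single token may merge into a superword, after which they behave as one unit for later merges.

from collections import Counter

def count_pairs(docs, allow_superwords=False):
    """Count candidate merges over a corpus.

    docs: list of documents; each document is a list of pretokens, and
    each pretoken is a list of current symbols (initially characters).
    Standard BPE only counts symbol pairs inside a pretoken. With
    allow_superwords=True, a pair of adjacent pretokens that have each
    already been merged down to a single token is also a candidate;
    merging it produces a "superword" such as " of the".
    """
    pairs = Counter()
    for doc in docs:
        for pretoken in doc:
            for a, b in zip(pretoken, pretoken[1:]):
                pairs[(a, b)] += 1                    # within-pretoken pair
        if allow_superwords:
            for left, right in zip(doc, doc[1:]):
                if len(left) == 1 and len(right) == 1:
                    pairs[(left[0], right[0])] += 1   # superword candidate
    return pairs

def apply_merge(docs, pair):
    """Apply one merge everywhere: inside pretokens as usual, and across
    two adjacent single-token pretokens, which then fuse into one."""
    a, b = pair
    merged = a + b
    result = []
    for doc in docs:
        new_doc = []
        for pretoken in doc:                # ordinary within-pretoken merges
            out, i = [], 0
            while i < len(pretoken):
                if i + 1 < len(pretoken) and (pretoken[i], pretoken[i + 1]) == pair:
                    out.append(merged)
                    i += 2
                else:
                    out.append(pretoken[i])
                    i += 1
            new_doc.append(out)
        fused, i = [], 0                    # superword merges across pretokens
        while i < len(new_doc):
            if (i + 1 < len(new_doc)
                    and new_doc[i] == [a] and new_doc[i + 1] == [b]):
                fused.append([merged])
                i += 2
            else:
                fused.append(new_doc[i])
                i += 1
        result.append(fused)
    return result

# Toy demo with the abstract's example: pretokens keep their leading space.
docs = [[list(s) for s in (" of", " the") * 3]]
for _ in range(6):                          # greedy: take most frequent pair
    pairs = count_pairs(docs, allow_superwords=True)
    if not pairs:
        break
    docs = apply_merge(docs, pairs.most_common(1)[0][0])
print(docs)   # -> [[[' of the'], [' of the'], [' of the']]]

Running the demo, five ordinary merges first collapse " of" and " the" into single tokens each, and the sixth merge crosses the pretoken boundary to produce the superword " of the" from the abstract's example; standard BPE (allow_superwords=False) would stop at the two word tokens.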

@article{schmidt2025_2504.00178,
  title={Boundless Byte Pair Encoding: Breaking the Pre-tokenization Barrier},
  author={Craig W. Schmidt and Varshini Reddy and Chris Tanner and Yuval Pinter},
  journal={arXiv preprint arXiv:2504.00178},
  year={2025}
}