Two Counterexamples to Tokenization and the Noiseless Channel

Abstract

In Tokenization and the Noiseless Channel (Zouhar et al., 2023a), Rényi efficiency is proposed as an intrinsic metric for evaluating a tokenizer: for NLP tasks, one should choose the tokenizer that yields the highest Rényi efficiency of the unigram distribution. Rényi efficiency is thus treated as a predictor of downstream performance (e.g., predicting BLEU for a machine translation task) without the expensive step of training multiple models with different tokenizers. Although useful, the metric's predictive power is not perfect, and the authors note that there are additional qualities of a good tokenization scheme that Rényi efficiency alone cannot capture. We describe two variants of BPE tokenization that can arbitrarily increase Rényi efficiency while decreasing downstream model performance. These counterexamples expose cases where Rényi efficiency fails as an intrinsic tokenization metric and thus offer insight for building more accurate predictors.
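
For reference, the following is a minimal sketch of the metric under discussion, assuming the definition in Zouhar et al. (2023a): Rényi efficiency is the Rényi entropy of the tokenizer's unigram token distribution, normalized by the log of the observed vocabulary size. The choice alpha = 2.5 follows the value the original paper reports as most predictive of BLEU; the function and example data here are purely illustrative, not the authors' reference implementation.

    import math
    from collections import Counter

    def renyi_efficiency(tokens, alpha=2.5):
        # Unigram distribution over the observed vocabulary
        # (assumes at least two distinct token types).
        counts = Counter(tokens)
        total = sum(counts.values())
        probs = [c / total for c in counts.values()]
        # Rényi entropy: H_alpha(p) = log(sum_i p_i^alpha) / (1 - alpha).
        h_alpha = math.log(sum(p ** alpha for p in probs)) / (1.0 - alpha)
        # Efficiency: normalize by the maximum possible entropy, log|V|.
        return h_alpha / math.log(len(probs))

    # A flatter unigram distribution scores higher than a peaked one.
    flat = ["a", "b", "c", "d"] * 25
    peaked = ["a"] * 97 + ["b", "c", "d"]
    print(renyi_efficiency(flat))    # 1.0 (uniform over 4 types)
    print(renyi_efficiency(peaked))  # ~0.04 (mass concentrated on one type)

Under this definition, a tokenizer that spreads probability mass more evenly over its vocabulary scores higher, which is exactly the property the paper's counterexamples exploit.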
