Distilling Token-Trained Models into Byte-Level Models

Zishuo Bao
Jiaqi Leng
Junxiong Wang
Bowen Peng
Yucheng Lu
Main: 8 pages · Appendix: 7 pages · Bibliography: 2 pages · 4 figures · 13 tables
Abstract

Byte Language Models (BLMs) have emerged as a promising direction for scaling language models beyond tokenization. However, existing BLMs typically require training from scratch on trillions of bytes, making them prohibitively expensive to build. In this paper, we propose an efficient distillation recipe that converts existing token-trained LLMs into BLMs while retaining comparable capabilities. Our recipe follows a two-stage curriculum: (1) Progressive Knowledge Distillation, which aligns byte-level representations with the embeddings of the token-trained teacher model; and (2) Byte-Level Supervised Fine-Tuning, which enables end-to-end generation entirely in the byte space. We validate our approach across multiple model families, including Llama, Qwen, and OLMo, and demonstrate that the distilled BLMs retain most of the teacher models' performance while training on only approximately 125B bytes.
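The abstract does not give implementation details, but the Stage-1 alignment objective can be pictured as follows. The sketch below is a hypothetical illustration, not the paper's method: it assumes mean-pooling of the student's byte-level hidden states over each teacher token's byte span, a learned linear projection, and an MSE loss against the frozen teacher's token embeddings. All names (`stage1_alignment_loss`, `token_spans`, `proj`) are invented for this example.

```python
# Hypothetical sketch of a Stage-1 byte-to-token alignment loss.
# Not the paper's implementation; pooling, projection, and loss are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def stage1_alignment_loss(
    byte_hidden: torch.Tensor,    # (B, num_bytes, d_student) student byte states
    teacher_embed: torch.Tensor,  # (B, num_tokens, d_teacher) frozen teacher embeddings
    token_spans: torch.Tensor,    # (B, num_tokens, 2) [start, end) byte offsets per token
    proj: nn.Linear,              # learned map from d_student to d_teacher
) -> torch.Tensor:
    """Mean-pool the student's byte states over each token's byte span,
    project into the teacher's embedding space, and regress with MSE."""
    pooled = []
    for b in range(byte_hidden.size(0)):
        spans = [byte_hidden[b, s:e].mean(dim=0) for s, e in token_spans[b].tolist()]
        pooled.append(torch.stack(spans))
    pooled = torch.stack(pooled)  # (B, num_tokens, d_student)
    return F.mse_loss(proj(pooled), teacher_embed)

if __name__ == "__main__":
    # Toy shapes: 2 sequences, 16 bytes forming 4 tokens of 4 bytes each.
    B, n_bytes, n_tok, d_s, d_t = 2, 16, 4, 32, 64
    spans = torch.tensor([[[0, 4], [4, 8], [8, 12], [12, 16]]] * B)
    loss = stage1_alignment_loss(
        torch.randn(B, n_bytes, d_s),
        torch.randn(B, n_tok, d_t),
        spans,
        nn.Linear(d_s, d_t),
    )
    print(loss.item())
```

Under these assumptions, the student never needs the teacher's tokenizer at inference time: the token spans are only used during distillation to tell the student which byte groups should reproduce which teacher embeddings, after which Stage-2 fine-tuning operates purely on bytes.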
