Turning Trash into Treasure: Accelerating Inference of Large Language Models with Token Recycling

The massive parameter counts of LLMs have made inference latency a fundamental bottleneck. Speculative decoding is a lossless approach to accelerating inference through a guess-and-verify paradigm. Some methods rely on additional model architectures to guess draft tokens, which require extra training before use. Alternatively, retrieval-based training-free techniques build libraries from pre-existing corpora or by n-gram generation. However, they face challenges such as large storage requirements, time-consuming retrieval, and limited adaptability. Observing that candidate tokens generated during the decoding process are likely to recur in future sequences, we propose Token Recycling. It stores candidate tokens in an adjacency matrix and employs a breadth-first-search (BFS)-like algorithm to construct a draft tree, which is then validated through tree attention. New candidate tokens from the decoding process are then used to update the matrix. Token Recycling requires less than 2 MB of additional storage and achieves an approximately 2x speedup across all sizes of LLMs. It significantly outperforms existing training-free methods by 30% and even a widely recognized training-based method by 25%.
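The following is a minimal, illustrative Python sketch of the recycling idea the abstract describes: an adjacency matrix of candidate successor tokens, a BFS-like expansion into a draft tree, and a row-wise update after each decoding step. The names and sizes (VOCAB_SIZE, TOP_K, build_draft_tree, update_matrix, depth/width limits) are assumptions, not the paper's implementation, and the tree-attention verification step is omitted.

```python
import numpy as np

VOCAB_SIZE = 32_000   # assumed vocabulary size
TOP_K = 8             # assumed number of stored candidate successors per token

# Adjacency matrix: row t holds the candidate tokens most recently seen after token t.
# At 32k x 8 int16 entries this is roughly 0.5 MB, consistent with the <2 MB claim.
adj = np.zeros((VOCAB_SIZE, TOP_K), dtype=np.int16)

def build_draft_tree(root_token: int, depth: int = 3, width: int = 2):
    """BFS-like expansion: starting from the last accepted token, repeatedly
    look up stored candidate successors to grow a draft tree."""
    tree = {0: (root_token, None)}          # node id -> (token, parent node id)
    frontier = [0]
    next_id = 1
    for _ in range(depth):
        new_frontier = []
        for node in frontier:
            token, _ = tree[node]
            for cand in adj[token][:width]:  # take the first few stored candidates
                tree[next_id] = (int(cand), node)
                new_frontier.append(next_id)
                next_id += 1
        frontier = new_frontier
    return tree

def update_matrix(token: int, new_candidates):
    """Recycle candidate tokens from the current decoding step by
    overwriting the row for the current token."""
    k = min(TOP_K, len(new_candidates))
    adj[token, :k] = new_candidates[:k]
```

In the actual method, the draft tree would be verified against the target model in a single forward pass using tree attention, and the accepted tokens plus fresh candidates would feed the next update.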
@article{luo2025_2408.08696,
  title   = {Turning Trash into Treasure: Accelerating Inference of Large Language Models with Token Recycling},
  author  = {Xianzhen Luo and Yixuan Wang and Qingfu Zhu and Zhiming Zhang and Xuanyu Zhang and Qing Yang and Dongliang Xu},
  journal = {arXiv preprint arXiv:2408.08696},
  year    = {2025}
}