
Accurate KV Cache Quantization with Outlier Tokens Tracing

Main: 8 pages · 11 figures · 18 tables · Bibliography: 3 pages · Appendix: 10 pages
Abstract

The impressive capabilities of Large Language Models (LLMs) come at the cost of substantial computational resources during deployment. While the KV Cache can significantly reduce recomputation during inference, it also introduces additional memory overhead. KV Cache quantization presents a promising solution, striking a good balance between memory usage and accuracy. Previous research has shown that the Keys are distributed by channel, while the Values are distributed by token. Consequently, the common practice is to apply channel-wise quantization to the Keys and token-wise quantization to the Values. However, our further investigation reveals that a small subset of unusual tokens exhibits unique characteristics that deviate from this pattern, which can substantially impact quantization accuracy. To address this, we develop a simple yet effective method to identify these tokens accurately during the decoding process and exclude them from quantization as outlier tokens, significantly improving overall accuracy. Extensive experiments show that our method achieves significant accuracy improvements under 2-bit quantization and can deliver a 6.4 times reduction in memory usage and a 2.3 times increase in throughput.
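The quantization scheme the abstract describes can be illustrated with a minimal sketch: Keys are quantized with per-channel statistics, Values with per-token statistics, and tokens flagged as outliers are excluded from quantization and kept in full precision. This is not the authors' implementation; the function names, the 2-bit default, and the z-score-on-Key-norms outlier criterion below are illustrative assumptions, and the paper's actual outlier-tracing rule may differ.

import torch

def quantize_dequantize(x, n_bits, dim):
    # Uniform asymmetric quantization along `dim`, then dequantization (for illustration).
    x_min = x.amin(dim=dim, keepdim=True)
    x_max = x.amax(dim=dim, keepdim=True)
    scale = (x_max - x_min).clamp(min=1e-8) / (2 ** n_bits - 1)
    q = torch.round((x - x_min) / scale).clamp(0, 2 ** n_bits - 1)
    return q * scale + x_min

def quantize_kv(keys, values, n_bits=2, outlier_z=3.0):
    # keys, values: [num_tokens, head_dim] for a single head.
    # Hypothetical outlier criterion: tokens whose Key norms deviate strongly
    # (by z-score) from the rest are excluded from quantization.
    token_norms = keys.norm(dim=-1)
    z = (token_norms - token_norms.mean()) / token_norms.std().clamp(min=1e-8)
    outlier_mask = z.abs() > outlier_z          # outlier tokens stay in full precision
    normal = ~outlier_mask

    keys_q = keys.clone()
    values_q = values.clone()
    # Keys: statistics computed per channel (reduce over the token axis).
    keys_q[normal] = quantize_dequantize(keys[normal], n_bits, dim=0)
    # Values: statistics computed per token (reduce over the channel axis).
    values_q[normal] = quantize_dequantize(values[normal], n_bits, dim=-1)
    return keys_q, values_q, outlier_mask

# Example usage on random tensors:
# keys, values = torch.randn(512, 128), torch.randn(512, 128)
# keys_q, values_q, mask = quantize_kv(keys, values, n_bits=2)

The design point, per the abstract, is that removing the few atypical tokens from the quantization pool keeps the value ranges tight for the remaining tokens, which is what preserves accuracy at very low bit widths such as 2 bits.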

@article{su2025_2505.10938,
  title={Accurate KV Cache Quantization with Outlier Tokens Tracing},
  author={Yi Su and Yuechi Zhou and Quantong Qiu and Juntao Li and Qingrong Xia and Ping Li and Xinyu Duan and Zhefeng Wang and Min Zhang},
  journal={arXiv preprint arXiv:2505.10938},
  year={2025}
}
