NQKV: A KV Cache Quantization Scheme Based on Normal Distribution Characteristics

Large Language Models (LLMs) have demonstrated remarkable proficiency across a wide range of tasks. However, LLMs often require larger batch sizes to enhance throughput or longer context lengths to meet task demands, which substantially increases the memory consumed by the Key-Value (KV) cache during inference and makes it a major bottleneck in LLM deployment. Quantization is a common and straightforward way to address this issue. Existing quantization methods for activations are, however, largely limited to 8 bits; pushing to lower bit widths can cause substantial accuracy drops. To quantize the KV cache to lower bit widths and save further space, we analyze its element distribution and design the NQKV algorithm. Because the elements within each block of the KV cache approximately follow a normal distribution, NQKV employs per-block quantile quantization, which achieves information-theoretically optimal quantization error for normally distributed data. Without significantly compromising model output quality, NQKV enables the OPT model to run inference with a 2x larger batch size or a 4x longer context length, and it improves throughput by 9.3x compared with inference without a KV cache.
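To make the per-block quantile quantization idea concrete, the following is a minimal sketch in Python/NumPy of quantile quantization under the assumption of normally distributed block elements. The 4-bit width, block size of 64, and all function names here are illustrative assumptions for exposition, not details taken from the paper.

```python
# Sketch: per-block quantile quantization assuming elements in each block
# are approximately N(0, sigma^2). Bit width, block size, and helper names
# are hypothetical choices for illustration.
import numpy as np
from scipy.stats import norm

def normal_quantile_codebook(bits: int = 4) -> np.ndarray:
    """Code points at equally spaced quantiles of the standard normal, scaled to [-1, 1]."""
    n = 2 ** bits
    probs = (np.arange(n) + 0.5) / n          # midpoints of n equal-probability bins
    q = norm.ppf(probs)                        # standard-normal quantiles
    return q / np.abs(q).max()                 # normalize to [-1, 1]

def quantize_block(block: np.ndarray, codebook: np.ndarray):
    """Quantize one block: store a per-block absmax scale plus codebook indices."""
    scale = np.abs(block).max() + 1e-12
    normed = block / scale
    idx = np.abs(normed[:, None] - codebook[None, :]).argmin(axis=1)
    return idx.astype(np.uint8), scale

def dequantize_block(idx: np.ndarray, scale: float, codebook: np.ndarray) -> np.ndarray:
    """Reconstruct a block from its codebook indices and scale."""
    return codebook[idx] * scale

# Usage: quantize a toy KV-cache slice split into blocks of 64 elements.
codebook = normal_quantile_codebook(bits=4)
kv = np.random.randn(2, 64).astype(np.float32)  # two blocks of synthetic KV data
for block in kv:
    idx, scale = quantize_block(block, codebook)
    recon = dequantize_block(idx, scale, codebook)
    print("max abs reconstruction error:", np.abs(recon - block).max())
```

Because the codebook places code points at equal-probability quantiles of the normal distribution, each code is used roughly equally often when the data really are normal, which is the sense in which quantile quantization minimizes expected quantization error for that distribution.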
@article{cai2025_2505.16210,
  title   = {NQKV: A KV Cache Quantization Scheme Based on Normal Distribution Characteristics},
  author  = {Zhihang Cai and Xingjun Zhang and Zhendong Tan and Zheng Wei},
  journal = {arXiv preprint arXiv:2505.16210},
  year    = {2025}
}