Lookahead Q-Cache: Achieving More Consistent KV Cache Eviction via Pseudo Query

Large language models (LLMs) rely on the key-value (KV) cache to accelerate decoding by avoiding redundant computation. However, KV cache memory usage grows substantially with longer text sequences, posing challenges for efficient deployment. Existing KV cache eviction methods prune tokens using prefilling-stage attention scores, which are inconsistent with the queries actually issued during decoding, especially under tight memory budgets. In this paper, we propose Lookahead Q-Cache (LAQ), a novel eviction framework that generates low-cost pseudo lookahead queries to better approximate the true decoding-stage queries. By using these lookahead queries as the observation window for importance estimation, LAQ achieves more consistent and accurate KV cache eviction aligned with real inference scenarios. Experimental results on the LongBench and Needle-in-a-Haystack benchmarks show that LAQ outperforms existing methods across various budget levels, achieving a 1 to 4 point improvement on LongBench under limited cache budgets. Moreover, LAQ is complementary to existing approaches and can be flexibly combined with them to yield further improvements.
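To make the idea concrete, below is a minimal sketch of pseudo-query-based importance scoring for KV cache eviction. It assumes the pseudo lookahead queries have already been produced by some cheap lookahead pass; the function name, tensor shapes, aggregation by mean, and the always-kept recent window are illustrative assumptions, not the authors' implementation.

import torch

def laq_evict(keys, values, pseudo_queries, budget, window=8):
    """Illustrative KV cache eviction guided by pseudo lookahead queries.

    keys, values:   [seq_len, d]  cached key / value vectors for one head
    pseudo_queries: [n_q, d]      queries from a low-cost lookahead pass (assumed given)
    budget:         int           number of KV entries to keep
    window:         int           most recent tokens that are always retained
    """
    seq_len, d = keys.shape
    # Attention of the lookahead queries over the cached keys.
    scores = torch.softmax(pseudo_queries @ keys.T / d ** 0.5, dim=-1)  # [n_q, seq_len]
    # Aggregate into a per-token importance estimate (mean over pseudo queries).
    importance = scores.mean(dim=0)                                      # [seq_len]
    # Protect the local window of recent tokens from eviction.
    importance[-window:] = float("inf")
    # Keep the top-`budget` tokens, preserving their original order.
    keep = torch.topk(importance, k=min(budget, seq_len)).indices.sort().values
    return keys[keep], values[keep], keep

The key difference from prefilling-based eviction is only the source of the queries used for scoring: importance is measured against lookahead queries rather than the prompt's own attention, which is what the paper argues aligns eviction with decoding-time behavior.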
@article{wang2025_2505.20334,
  title   = {Lookahead Q-Cache: Achieving More Consistent KV Cache Eviction via Pseudo Query},
  author  = {Yixuan Wang and Shiyu Ji and Yijun Liu and Yuzhuang Xu and Yang Xu and Qingfu Zhu and Wanxiang Che},
  journal = {arXiv preprint arXiv:2505.20334},
  year    = {2025}
}