
HATA: Trainable and Hardware-Efficient Hash-Aware Top-k Attention for Scalable Large Model Inference

Main: 9 pages
Bibliography: 2 pages
Appendix: 5 pages
10 figures
11 tables
Abstract

Large Language Models (LLMs) have emerged as a pivotal research area, yet the attention module remains a critical bottleneck in LLM inference, even with techniques like KVCache to mitigate redundant computation. While various top-k attention mechanisms have been proposed to accelerate LLM inference by exploiting the inherent sparsity of attention, they often struggle to strike a balance between efficiency and accuracy. In this paper, we introduce HATA (Hash-Aware Top-k Attention), a novel approach that systematically integrates low-overhead learning-to-hash techniques into the top-k attention process. Unlike existing top-k attention methods, which seek an absolute estimate of the query-key (qk) score, typically at great cost, HATA maps queries and keys into binary hash codes and recovers the relative qk score order at very low cost, which is sufficient for realizing top-k attention. Extensive experiments demonstrate that HATA achieves up to 7.2× speedup compared to vanilla full attention while maintaining model accuracy. In addition, HATA outperforms state-of-the-art top-k attention methods in both accuracy and efficiency across multiple mainstream LLM models and diverse tasks. HATA is open source at this https URL.
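The abstract describes selecting top-k keys by comparing binary hash codes of queries and keys rather than estimating exact qk scores. Below is a minimal, hedged sketch of that idea (not the authors' released implementation): a learned projection (here the hypothetical `hash_proj`) binarizes queries and keys, code agreement gives a cheap relative ordering, and exact attention is computed only over the selected keys. Names such as `hash_proj` and `topk` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def hash_topk_attention(q, k, v, hash_proj, topk):
    """
    q:         (d,)    single query vector
    k, v:      (n, d)  cached keys / values
    hash_proj: (d, b)  learned hashing projection (b = number of hash bits)
    topk:      int     number of keys kept for exact attention
    """
    # Binarize query and keys into {-1, +1} codes (sign of the projection).
    q_code = torch.sign(q @ hash_proj)          # (b,)
    k_code = torch.sign(k @ hash_proj)          # (n, b)

    # Code agreement equals b - 2 * Hamming distance, so it preserves the
    # *relative* order of qk similarities; no absolute score estimate needed.
    hash_scores = k_code @ q_code               # (n,)

    # Keep only the most promising keys according to the hash scores.
    idx = torch.topk(hash_scores, k=min(topk, k.shape[0])).indices

    # Exact softmax attention restricted to the selected keys.
    d = q.shape[-1]
    attn = F.softmax((k[idx] @ q) / d**0.5, dim=-1)   # (topk,)
    return attn @ v[idx]                              # (d,)
```

The cheap stage only needs bit operations over short codes, while the expensive exact attention touches just `topk` of the `n` cached keys, which is where the reported speedup over full attention would come from under this scheme.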

@article{gong2025_2506.02572,
  title={HATA: Trainable and Hardware-Efficient Hash-Aware Top-k Attention for Scalable Large Model Inference},
  author={Ping Gong and Jiawei Yi and Shengnan Wang and Juncheng Zhang and Zewen Jin and Ouxiang Zhou and Ruibo Liu and Guanbin Xu and Youhui Bai and Bowen Ye and Kun Yuan and Tong Yang and Gong Zhang and Renhai Chen and Feng Wu and Cheng Li},
  journal={arXiv preprint arXiv:2506.02572},
  year={2025}
}