
DAM: Dynamic Attention Mask for Long-Context Large Language Model Inference Acceleration

Main: 9 pages, 9 figures, 2 tables; Bibliography: 2 pages; Appendix: 3 pages
Abstract

Long-context understanding is crucial for many NLP applications, yet transformers struggle with efficiency due to the quadratic complexity of self-attention. Sparse attention methods alleviate this cost but often impose static, predefined masks, failing to capture heterogeneous attention patterns. This results in suboptimal token interactions, limiting adaptability and retrieval accuracy in long-sequence tasks. This work introduces a dynamic sparse attention mechanism that assigns adaptive masks at the attention-map level, preserving heterogeneous patterns across layers and heads. Unlike existing approaches, our method eliminates the need for fine-tuning and predefined mask structures while maintaining computational efficiency. By learning context-aware attention structures, it achieves high alignment with full-attention models, ensuring minimal performance degradation while reducing memory and compute overhead. This approach provides a scalable alternative to full attention, enabling the practical deployment of large-scale Large Language Models (LLMs) without sacrificing retrieval performance. DAM is available at: this https URL.
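To make the idea of an adaptive, per-head mask concrete, below is a minimal sketch of dynamic sparse attention masking. It is not the authors' DAM implementation: the top-k thresholding rule, the `keep_ratio` parameter, and the function name are assumptions chosen for illustration, and this toy version still computes the full score matrix, so it only shows how a mask can adapt to each head's attention pattern rather than following a fixed, predefined shape; it does not demonstrate the memory or compute savings.

```python
# Illustrative sketch only: dynamic per-head sparse attention masking.
# NOT the DAM algorithm from the paper; top-k thresholding is an assumed
# stand-in for deriving an adaptive mask from the attention map itself.
import torch
import torch.nn.functional as F


def dynamic_masked_attention(q, k, v, keep_ratio=0.1):
    """q, k, v: (batch, heads, seq_len, head_dim).

    Keeps the top `keep_ratio` fraction of attention entries per query row,
    independently for each head, so the mask differs across heads and rows.
    """
    d = q.size(-1)
    scores = torch.matmul(q, k.transpose(-2, -1)) / d**0.5   # (B, H, S, S)
    k_keep = max(1, int(keep_ratio * scores.size(-1)))
    # Per-row threshold: the k-th largest score in each query row of each head.
    thresh = scores.topk(k_keep, dim=-1).values[..., -1:]    # (B, H, S, 1)
    mask = scores >= thresh
    scores = scores.masked_fill(~mask, float("-inf"))
    attn = F.softmax(scores, dim=-1)
    return torch.matmul(attn, v)


if __name__ == "__main__":
    B, H, S, D = 1, 4, 128, 64
    q, k, v = (torch.randn(B, H, S, D) for _ in range(3))
    out = dynamic_masked_attention(q, k, v, keep_ratio=0.1)
    print(out.shape)  # torch.Size([1, 4, 128, 64])
```

In an actual long-context deployment, the point of such a mask is to skip computing and storing the pruned entries in the first place; the sketch above only illustrates the mask-selection idea at the attention-map level.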

@article{zhang2025_2506.11104,
  title={DAM: Dynamic Attention Mask for Long-Context Large Language Model Inference Acceleration},
  author={Hanzhi Zhang and Heng Fan and Kewei Sha and Yan Huang and Yunhe Feng},
  journal={arXiv preprint arXiv:2506.11104},
  year={2025}
}