
MonarchAttention: Zero-Shot Conversion to Fast, Hardware-Aware Structured Attention

Comments: 9 pages (main), 6 pages (appendix), 4 pages (bibliography); 7 figures, 3 tables
Abstract

Transformers have achieved state-of-the-art performance across various tasks, but suffer from a notable quadratic complexity in sequence length due to the attention mechanism. In this work, we propose MonarchAttention -- a novel approach to sub-quadratic attention approximation via Monarch matrices, an expressive class of structured matrices. Based on the variational form of softmax, we describe an efficient optimization-based algorithm to compute an approximate projection of softmax attention onto the class of Monarch matrices with $\Theta(N\sqrt{N}d)$ computational complexity and $\Theta(Nd)$ memory/IO complexity. Unlike previous approaches, MonarchAttention is both (1) transferable, yielding minimal performance loss with no additional training, even when replacing every attention layer of the transformer, and (2) hardware-efficient, utilizing the highest-throughput tensor core units on modern GPUs. With optimized kernels, MonarchAttention achieves substantial speed-ups in wall-time over FlashAttention-2: $1.4\times$ for shorter sequences ($N=256$), $4.5\times$ for medium-length sequences ($N=4K$), and $8.2\times$ for longer sequences ($N=16K$). We demonstrate the quality of MonarchAttention on diverse tasks and architectures in vision and language problems, showing that it flexibly and accurately approximates softmax attention in a variety of contexts. Our code is available at this https URL.
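To make the complexity claim concrete, the sketch below illustrates the structure of a Monarch matrix: a product of two block-diagonal factors interleaved with a reshape permutation, which can be applied to an $N \times d$ input in $\Theta(N\sqrt{N}d)$ time without ever forming the dense $N \times N$ matrix. This is only a minimal illustration of the Monarch matrix class, not the paper's MonarchAttention projection algorithm or its optimized kernels; the function and variable names (`monarch_matvec`, `L_blocks`, `R_blocks`) are hypothetical.

```python
import torch

def monarch_matvec(L_blocks, R_blocks, x):
    """Apply a Monarch-structured N x N matrix to x (shape N x d).

    L_blocks, R_blocks: (m, m, m) tensors holding m blocks of size m x m,
    where N = m * m. The full matrix is (P^T L P) R, with P the reshape
    (stride) permutation. Cost: Theta(N * sqrt(N) * d), never materializing
    the dense N x N matrix. (Illustrative sketch, not the paper's algorithm.)
    """
    m = L_blocks.shape[0]
    N, d = x.shape
    assert N == m * m, "sequence length must be a perfect square in this sketch"

    # Block-diagonal R: each of the m blocks acts on a contiguous chunk of rows.
    x = x.reshape(m, m, d)                         # (block, within-block, d)
    x = torch.einsum("bij,bjd->bid", R_blocks, x)

    # Permute (reshape transpose), apply block-diagonal L, permute back.
    x = x.transpose(0, 1)                          # (within-block, block, d)
    x = torch.einsum("bij,bjd->bid", L_blocks, x)
    return x.transpose(0, 1).reshape(N, d)

# Usage: N = 16 (m = 4), d = 8
m, d = 4, 8
L = torch.randn(m, m, m)
R = torch.randn(m, m, m)
x = torch.randn(m * m, d)
y = monarch_matvec(L, R, x)  # shape (16, 8)
```

The two batched block multiplications cost $2m \cdot m^2 \cdot d = 2N\sqrt{N}d$ multiply-adds, and each maps directly onto batched GEMMs, which is why this structure suits tensor-core hardware.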

@article{yaras2025_2505.18698,
  title={MonarchAttention: Zero-Shot Conversion to Fast, Hardware-Aware Structured Attention},
  author={Can Yaras and Alec S. Xu and Pierre Abillama and Changwoo Lee and Laura Balzano},
  journal={arXiv preprint arXiv:2505.18698},
  year={2025}
}