
A3: an Analytical Low-Rank Approximation Framework for Attention

Main: 9 pages, 6 figures, 7 tables; Bibliography: 2 pages; Appendix: 16 pages
Abstract

Large language models have demonstrated remarkable performance; however, their massive parameter counts make deployment highly expensive. Low-rank approximation offers a promising compression solution, yet existing approaches have two main limitations: (1) they focus on minimizing the output error of individual linear layers without considering the architectural characteristics of Transformers, and (2) they decompose a large weight matrix into two small low-rank matrices. Consequently, these methods often fall short of other compression techniques such as pruning and quantization, and introduce runtime overhead, such as extra GEMM kernel launches for the decomposed small matrices. To address these limitations, we propose A^3, a post-training low-rank approximation framework. A^3 splits a Transformer layer into three functional components, namely QK, OV, and MLP. For each component, A^3 provides an analytical solution that reduces the hidden dimension inside that component while minimizing the component's functional loss (i.e., the error in attention scores, attention outputs, and MLP outputs). This approach directly reduces model size, KV cache size, and FLOPs without introducing any runtime overhead. It also reframes the optimization problem, moving from minimizing the loss of individual linear layers toward improving end-to-end performance. Through extensive experiments, we show that A^3 maintains superior performance compared to state-of-the-art methods. For example, under the same computation and memory reduction budget, our low-rank approximated LLaMA 3.1-70B achieves a perplexity of 4.69 on WikiText-2, outperforming the previous state of the art's 7.87 by 3.18. We also demonstrate the versatility of A^3, including KV cache compression, quantization, and mixed-rank assignments for enhanced performance.
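As a concrete illustration of the QK component, the minimal sketch below shows how the query/key head dimension could in principle be jointly reduced with a truncated SVD of the product W_Q W_K^T, exploiting the fact that attention scores depend on the two projections only through this product. This is a simplified sketch under that assumption, not the paper's analytical solution (which is formulated to minimize the functional loss, e.g. taking input activation statistics into account); the function name, shapes, and demo values are hypothetical.

```python
import torch

def lowrank_qk_sketch(W_q: torch.Tensor, W_k: torch.Tensor, rank: int):
    """Simplified illustration (not the paper's method): jointly shrink the
    query/key head dimension of one attention head to `rank` by truncating
    the SVD of W_q @ W_k.T, so that the score bilinear form x_q W_q W_k^T x_k^T
    is approximately preserved.

    W_q, W_k: (d_model, d_head) projection weights for a single head.
    Returns reduced projections of shape (d_model, rank).
    """
    # Attention scores depend on W_q and W_k only through the product W_q W_k^T.
    M = W_q @ W_k.T                          # (d_model, d_model)
    U, S, Vh = torch.linalg.svd(M, full_matrices=False)
    sqrt_S = torch.sqrt(S[:rank])
    W_q_low = U[:, :rank] * sqrt_S           # (d_model, rank)
    W_k_low = Vh[:rank, :].T * sqrt_S        # (d_model, rank)
    return W_q_low, W_k_low

# Hypothetical usage: reduce a head dimension of 128 to 64 and check the error.
W_q, W_k = torch.randn(4096, 128), torch.randn(4096, 128)
W_q_low, W_k_low = lowrank_qk_sketch(W_q, W_k, rank=64)
rel_err = torch.linalg.norm(W_q_low @ W_k_low.T - W_q @ W_k.T) / torch.linalg.norm(W_q @ W_k.T)
print(f"relative score-matrix error at rank 64: {rel_err:.3f}")
```

In the same spirit, the reduced factors replace the original projections in place, so no additional matrices (and hence no extra GEMM kernel launches) are introduced at inference time.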

@article{wong2025_2505.12942,
  title={A3: an Analytical Low-Rank Approximation Framework for Attention},
  author={Jeffrey T. H. Wong and Cheng Zhang and Xinye Cao and Pedro Gimenes and George A. Constantinides and Wayne Luk and Yiren Zhao},
  journal={arXiv preprint arXiv:2505.12942},
  year={2025}
}
