
Latent Multi-Head Attention for Small Language Models

Main: 2 pages · 1 figure · 5 tables · Bibliography: 3 pages · Appendix: 1 page
Abstract

We present the first comprehensive study of latent multi-head attention (MLA) for small language models, revealing interesting efficiency-quality trade-offs. Training 30M-parameter GPT models on 100,000 synthetic stories, we benchmark three architectural variants: standard multi-head attention (MHA), MLA, and MLA with rotary positional embeddings (MLA+RoPE). Our key finding is that MLA+RoPE with half-rank latent dimensions (r = d/2) achieves a 45% KV-cache memory reduction while incurring only a 0.3% increase in validation loss (essentially matching MHA quality), a Pareto improvement for memory-constrained deployment. We further show that RoPE is crucial for MLA in small models: without it, MLA underperforms vanilla attention by 3-5%, but with RoPE, it surpasses vanilla by 2%. Inference benchmarks on NVIDIA A100 GPUs reveal that MLA with r = d/2 achieves a 1.4× speedup over full-rank MLA while maintaining the memory savings. GPT-4 evaluations corroborate perplexity results, with our models achieving the highest quality scores (7.4/10) across grammar, creativity, and consistency metrics. Code and models will be released upon acceptance.
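To make the KV-cache saving concrete, the sketch below shows a low-rank latent attention layer in the spirit of MLA: keys and values are compressed into a shared latent of dimension r (e.g. r = d/2), and only that latent is cached at inference. This is a minimal illustrative sketch, not the authors' released code; the class and argument names (LatentKVAttention, latent_dim) are assumptions, and RoPE and causal masking are omitted for brevity.

```python
# Minimal sketch of latent (low-rank) KV attention with r = d/2 style compression.
# Assumes a PyTorch-like setup; names are illustrative, not from the paper's codebase.
import math
import torch
import torch.nn as nn

class LatentKVAttention(nn.Module):
    def __init__(self, d_model: int, n_heads: int, latent_dim: int):
        super().__init__()
        self.n_heads = n_heads
        self.head_dim = d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model, bias=False)
        # Down-project hidden states into a shared latent of size latent_dim
        # (e.g. d_model // 2); only this latent is cached, shrinking the KV cache.
        self.kv_down = nn.Linear(d_model, latent_dim, bias=False)
        self.k_up = nn.Linear(latent_dim, d_model, bias=False)
        self.v_up = nn.Linear(latent_dim, d_model, bias=False)
        self.out_proj = nn.Linear(d_model, d_model, bias=False)

    def forward(self, x: torch.Tensor, kv_cache: torch.Tensor | None = None):
        b, t, d = x.shape
        latent = self.kv_down(x)                      # (b, t, latent_dim)
        if kv_cache is not None:                      # append new latents to the cache
            latent = torch.cat([kv_cache, latent], dim=1)
        q = self.q_proj(x).view(b, t, self.n_heads, self.head_dim).transpose(1, 2)
        k = self.k_up(latent).view(b, -1, self.n_heads, self.head_dim).transpose(1, 2)
        v = self.v_up(latent).view(b, -1, self.n_heads, self.head_dim).transpose(1, 2)
        # Standard scaled dot-product attention over the reconstructed K/V
        # (causal masking and RoPE omitted in this sketch).
        attn = torch.softmax(q @ k.transpose(-2, -1) / math.sqrt(self.head_dim), dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, t, d)
        return self.out_proj(out), latent             # latent serves as the new KV cache
```

With latent_dim = d_model // 2, the cached tensor per token is half the size of a full K/V pair, which is the source of the roughly 45% KV-cache reduction reported above.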

@article{mehta2025_2506.09342,
  title={Latent Multi-Head Attention for Small Language Models},
  author={Sushant Mehta and Raj Dandekar and Rajat Dandekar and Sreedath Panat},
  journal={arXiv preprint arXiv:2506.09342},
  year={2025}
}