QKV Projections Require a Fraction of Their Memory

The Multi-Head Attention mechanism is central to LLM operation, and multiple works target its compute and memory efficiency during training. While most works focus on approximating the scaled dot product, the memory consumption of the linear projections that compute the Q, K, and V tensors from the input is often overlooked. To address this, we propose Point-Approximate Matrix Multiplication (PAMM), a novel tensor compression technique that reduces the memory consumption of the Q, K, and V projections in attention layers, effectively erasing their memory footprint, while achieving similar or better final perplexity. PAMM is fully composable with efficient attention techniques such as FlashAttention, making it a practical and complementary method for memory-efficient LLM training.
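
As a rough illustration (not the paper's PAMM implementation), the sketch below shows a standard fused Q/K/V projection in PyTorch. During training, autograd caches the layer input in order to compute the projection weights' gradients in the backward pass; it is this cached activation that a compression scheme such as PAMM would approximate rather than store exactly. The module name `QKVProjection` and all dimensions are hypothetical.

```python
# Minimal, self-contained sketch of the Q/K/V projection whose training-time
# memory is discussed in the abstract. Names and sizes are illustrative.
import torch
import torch.nn as nn


class QKVProjection(nn.Module):
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.n_heads = n_heads
        self.head_dim = d_model // n_heads
        # Single fused linear layer producing Q, K and V in one matmul.
        self.qkv = nn.Linear(d_model, 3 * d_model, bias=False)

    def forward(self, x: torch.Tensor):
        # x: (batch, seq_len, d_model). Autograd stores this activation so the
        # backward pass can compute the gradient of self.qkv's weights; a
        # compression scheme like PAMM would keep an approximation of this
        # cached input instead of the exact tensor.
        b, s, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)

        def to_heads(t: torch.Tensor) -> torch.Tensor:
            # (batch, seq_len, d_model) -> (batch, n_heads, seq_len, head_dim)
            return t.reshape(b, s, self.n_heads, self.head_dim).transpose(1, 2)

        return to_heads(q), to_heads(k), to_heads(v)


proj = QKVProjection(d_model=512, n_heads=8)
q, k, v = proj(torch.randn(2, 128, 512, requires_grad=True))
print(q.shape)  # torch.Size([2, 8, 128, 64])
```

Because PAMM only targets the cached activations of these projections, the outputs Q, K, and V themselves are unchanged and can be passed directly to an efficient attention kernel such as FlashAttention.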
@article{khalf2025_2506.02939,
  title   = {QKV Projections Require a Fraction of Their Memory},
  author  = {Malik Khalf and Yara Shamshoum and Nitzan Hodos and Yuval Sieradzki and Assaf Schuster},
  journal = {arXiv preprint arXiv:2506.02939},
  year    = {2025}
}