
ABBA: Highly Expressive Hadamard Product Adaptation for Large Language Models

Abstract

Large Language Models have demonstrated strong performance across a wide range of tasks, but adapting them efficiently to new domains remains a key challenge. Parameter-Efficient Fine-Tuning (PEFT) methods address this by introducing lightweight, trainable modules while keeping most pre-trained weights fixed. The prevailing approach, LoRA, models updates using a low-rank decomposition, but its expressivity is inherently constrained by the rank. Recent methods like HiRA aim to increase expressivity by incorporating a Hadamard product with the frozen weights, but still rely on the structure of the pre-trained model. We introduce ABBA, a new PEFT architecture that reparameterizes the update as a Hadamard product of two independently learnable low-rank matrices. In contrast to prior work, ABBA fully decouples the update from the pre-trained weights, enabling both components to be optimized freely. This yields significantly higher expressivity under the same parameter budget. We formally analyze ABBA's expressive capacity and validate its advantages through matrix reconstruction experiments. Empirically, ABBA achieves state-of-the-art results on arithmetic and commonsense reasoning benchmarks, consistently outperforming existing PEFT methods by a significant margin across multiple models. Our code is publicly available at: this https URL.
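To make the architecture described above concrete, here is a minimal PyTorch sketch of an ABBA-style adapter layer, where the weight update is the Hadamard (element-wise) product of two independently learned low-rank factors added to a frozen base linear layer. The class name `ABBALinear`, the ranks `r1`/`r2`, the scaling factor, and the zero-initialization of `B1` are illustrative assumptions, not the paper's exact implementation or initialization scheme.

```python
import torch
import torch.nn as nn


class ABBALinear(nn.Module):
    """Sketch of an ABBA-style adapter: delta_W = (B1 @ A1) * (B2 @ A2),
    added to a frozen pre-trained linear layer. Names and init are illustrative."""

    def __init__(self, base: nn.Linear, r1: int = 8, r2: int = 8, scale: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pre-trained weights stay frozen

        out_f, in_f = base.weight.shape
        # Two independent low-rank pairs, both fully learnable.
        self.A1 = nn.Parameter(torch.randn(r1, in_f) * 0.01)
        self.B1 = nn.Parameter(torch.zeros(out_f, r1))  # zero init => delta_W = 0 at start
        self.A2 = nn.Parameter(torch.randn(r2, in_f) * 0.01)
        self.B2 = nn.Parameter(torch.randn(out_f, r2) * 0.01)
        self.scale = scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Hadamard product of two low-rank updates; its rank can reach up to
        # r1 * r2, exceeding a single low-rank factorization with the same
        # number of trainable parameters.
        delta_w = (self.B1 @ self.A1) * (self.B2 @ self.A2)
        return self.base(x) + self.scale * (x @ delta_w.t())
```

Note that this naive sketch materializes the full dense `delta_w` on every forward pass for clarity; a practical implementation would avoid forming it explicitly.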

@article{singhal2025_2505.14238,
  title={ABBA: Highly Expressive Hadamard Product Adaptation for Large Language Models},
  author={Raghav Singhal and Kaustubh Ponkshe and Rohit Vartak and Praneeth Vepakomma},
  journal={arXiv preprint arXiv:2505.14238},
  year={2025}
}