Revisiting LoRA through the Lens of Parameter Redundancy: Spectral Encoding Helps

Low-Rank Adaptation (LoRA) has emerged as a prominent technique for fine-tuning large foundation models. Despite its successes, substantial parameter redundancy has been recognized as a bottleneck that limits the capacity and efficiency of LoRA. In this work, we systematically investigate the impact of redundancy in LoRA fine-tuning and reveal that reducing density redundancy does not degrade expressiveness. Based on this insight, we introduce Spectral-encoding Low-Rank Adaptation (SeLoRA), which harnesses the robust expressiveness of spectral bases to re-parameterize LoRA from a sparse spectral subspace. Designed for simplicity, SeLoRA integrates seamlessly with various LoRA variants as a scalable, plug-and-play framework for boosting their performance. Extensive experiments substantiate that SeLoRA achieves greater efficiency with fewer parameters, delivering superior performance over strong baselines on a range of downstream tasks, including commonsense reasoning, math reasoning, and code generation.
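
To make the idea of "re-parameterizing LoRA from a sparse spectral subspace" concrete, below is a minimal sketch (not the authors' implementation) of a LoRA-style adapter whose low-rank factors are expressed as learnable coefficients over a fixed, truncated spectral basis. The class name `SpectralLoRALinear`, the hyperparameters `rank` and `n_freq`, and the choice of a DCT-II basis are illustrative assumptions; the paper's actual spectral encoding may differ.

```python
# Sketch: LoRA factors encoded as coefficients over fixed DCT basis vectors.
# Only the small coefficient matrices are trained; the bases and the frozen
# base layer are not. Assumed design, for illustration only.
import math
import torch
import torch.nn as nn


def dct_basis(length: int, n_freq: int) -> torch.Tensor:
    """First `n_freq` orthonormal DCT-II basis vectors, shape (n_freq, length)."""
    n = torch.arange(length, dtype=torch.float32)
    k = torch.arange(n_freq, dtype=torch.float32).unsqueeze(1)
    basis = torch.cos(math.pi * (n + 0.5) * k / length)
    basis[0] *= 1.0 / math.sqrt(2.0)
    return basis * math.sqrt(2.0 / length)


class SpectralLoRALinear(nn.Module):
    """Frozen linear layer plus a low-rank update whose factors are spectrally encoded."""

    def __init__(self, base: nn.Linear, rank: int = 8, n_freq: int = 32, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)
        d_out, d_in = base.out_features, base.in_features
        # Fixed spectral bases (buffers, not trained), truncated to n_freq frequencies.
        self.register_buffer("basis_in", dct_basis(d_in, n_freq))    # (n_freq, d_in)
        self.register_buffer("basis_out", dct_basis(d_out, n_freq))  # (n_freq, d_out)
        # Trainable spectral coefficients; B-coefficients start at zero so the
        # initial update is zero, as in standard LoRA initialization.
        self.coef_A = nn.Parameter(torch.randn(rank, n_freq) * 0.01)  # encodes A of shape (rank, d_in)
        self.coef_B = nn.Parameter(torch.zeros(rank, n_freq))         # encodes B of shape (d_out, rank)
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        A = self.coef_A @ self.basis_in            # reconstruct A: (rank, d_in)
        B = (self.coef_B @ self.basis_out).t()     # reconstruct B: (d_out, rank)
        return self.base(x) + self.scale * (x @ A.t() @ B.t())


# Usage: wrap an existing projection and fine-tune only the spectral coefficients.
layer = SpectralLoRALinear(nn.Linear(768, 768), rank=8, n_freq=32)
out = layer(torch.randn(4, 768))
print(out.shape)  # torch.Size([4, 768])
```

Under these assumptions, each adapted layer trains 2 * rank * n_freq coefficients instead of the rank * (d_in + d_out) parameters of vanilla LoRA, which is how a sparse spectral subspace can cut parameter count while the fixed bases preserve expressiveness.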
@article{cheng2025_2506.16787,
  title={Revisiting LoRA through the Lens of Parameter Redundancy: Spectral Encoding Helps},
  author={Jiashun Cheng and Aochuan Chen and Nuo Chen and Ziqi Gao and Yuhan Li and Jia Li and Fugee Tsung},
  journal={arXiv preprint arXiv:2506.16787},
  year={2025}
}