RefLoRA: Refactored Low-Rank Adaptation for Efficient Fine-Tuning of Large Models

Low-Rank Adaptation (LoRA) lowers the computational and memory overhead of fine-tuning large models by updating a low-dimensional subspace of the pre-trained weight matrix. Albeit efficient, LoRA exhibits suboptimal convergence and noticeable performance degradation, due to inconsistent and imbalanced weight updates induced by its non-unique low-rank factorizations. To overcome these limitations, this article identifies the optimal low-rank factorization per step that minimizes an upper bound on the loss. The resultant refactored low-rank adaptation (RefLoRA) method promotes a flatter loss landscape, along with consistent and balanced weight updates, thus speeding up stable convergence. Extensive experiments evaluate RefLoRA on natural language understanding and commonsense reasoning tasks with popular large language models, including DeBERTaV3, LLaMA-7B, LLaMA2-7B, and LLaMA3-8B. The numerical tests corroborate that RefLoRA converges faster, outperforms various benchmarks, and incurs negligible computational overhead compared with state-of-the-art LoRA variants.
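As a minimal illustration of the non-uniqueness the abstract refers to (not the paper's algorithm), consider that LoRA adds a low-rank product BA to the frozen weight W0, and any invertible matrix R yields an equivalent factorization (BR)(R^{-1}A): the weight update is identical, yet the individual factors, and hence their gradients, differ. The NumPy sketch below uses arbitrary dimensions and omits LoRA's scaling factor.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r = 64, 32, 4                 # output dim, input dim, LoRA rank (arbitrary choices)

W0 = rng.standard_normal((d, k))    # frozen pre-trained weight
B = rng.standard_normal((d, r))     # LoRA factor B
A = rng.standard_normal((r, k))     # LoRA factor A

# LoRA's effective weight: W = W0 + B @ A (scaling factor omitted for brevity)
W = W0 + B @ A

# Any invertible r x r matrix R gives an equivalent ("refactored") pair of factors.
R = rng.standard_normal((r, r)) + 5 * np.eye(r)   # well-conditioned, hence invertible
B_ref, A_ref = B @ R, np.linalg.inv(R) @ A

# The weight update is unchanged ...
assert np.allclose(B @ A, B_ref @ A_ref)

# ... but the factors themselves differ, so gradient-based updates on (B, A)
# depend on which factorization is used; this is the inconsistency/imbalance
# that RefLoRA's per-step refactoring is designed to address.
print(np.linalg.norm(B - B_ref), np.linalg.norm(A - A_ref))
```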
@article{zhang2025_2505.18877,
  title   = {RefLoRA: Refactored Low-Rank Adaptation for Efficient Fine-Tuning of Large Models},
  author  = {Yilang Zhang and Bingcong Li and Georgios B. Giannakis},
  journal = {arXiv preprint arXiv:2505.18877},
  year    = {2025}
}