MAP: Revisiting Weight Decomposition for Low-Rank Adaptation

29 May 2025
Chongjie Si, Zhiyi Shi, Yadao Wang, Xiaokang Yang, Susanto Rahardja, Wei Shen
ArXiv (abs) · PDF · HTML
Main: 8 pages · 2 figures · 8 tables · Bibliography: 3 pages · Appendix: 2 pages
Abstract

The rapid development of large language models has revolutionized natural language processing, but their fine-tuning remains computationally expensive, hindering broad deployment. Parameter-efficient fine-tuning (PEFT) methods, such as LoRA, have emerged as solutions. Recent work like DoRA attempts to further decompose weight adaptation into direction and magnitude components. However, existing formulations often define direction heuristically at the column level, lacking a principled geometric foundation. In this paper, we propose MAP, a novel framework that reformulates weight matrices as high-dimensional vectors and decouples their adaptation into direction and magnitude in a rigorous manner. MAP normalizes the pre-trained weights, learns a directional update, and introduces two scalar coefficients to independently scale the magnitude of the base and update vectors. This design enables more interpretable and flexible adaptation, and can be seamlessly integrated into existing PEFT methods. Extensive experiments show that MAP significantly improves performance when coupled with existing methods, offering a simple yet powerful enhancement. Given the universality and simplicity of MAP, we hope it can serve as a default setting for designing future PEFT methods.
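
The decomposition described in the abstract lends itself to a compact implementation. The following is a minimal PyTorch sketch of the idea, not the authors' reference code: the module name MAPLinear, the LoRA-style low-rank parameterization of the directional update, the Frobenius-norm normalization, and the initialization choices are all assumptions made for illustration; the exact formulation is given in the paper.

# Minimal sketch of the direction/magnitude decomposition described in the
# abstract (assumptions: LoRA-style low-rank update, Frobenius-norm
# normalization, initialization that preserves the pre-trained layer).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MAPLinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        # Frozen pre-trained weight, treated as a single high-dimensional vector.
        self.weight = nn.Parameter(base.weight.detach().clone(), requires_grad=False)
        self.bias = base.bias
        out_f, in_f = self.weight.shape
        # Low-rank factors that parameterize the learned directional update.
        self.lora_A = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.lora_B = nn.Parameter(torch.randn(out_f, rank) * 0.01)
        # Two scalar coefficients that independently scale the magnitudes of
        # the normalized base and update directions. alpha starts at ||W0|| and
        # beta at 0 so the module initially reproduces the pre-trained layer.
        self.alpha = nn.Parameter(self.weight.norm().detach().clone())
        self.beta = nn.Parameter(torch.tensor(0.0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        eps = 1e-8
        base_dir = self.weight / (self.weight.norm() + eps)   # unit-norm base direction
        delta = self.lora_B @ self.lora_A                      # low-rank directional update
        delta_dir = delta / (delta.norm() + eps)               # unit-norm update direction
        w = self.alpha * base_dir + self.beta * delta_dir      # recombined weight
        return F.linear(x, w, self.bias)

In this sketch only lora_A, lora_B, alpha, and beta are trained, so the trainable parameter count stays at the LoRA level plus two scalars per layer; in principle the same wrapping could be placed around other PEFT parameterizations of the update, in line with the abstract's claim that MAP integrates with existing methods.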

@article{si2025_2505.23094,
  title={MAP: Revisiting Weight Decomposition for Low-Rank Adaptation},
  author={Chongjie Si and Zhiyi Shi and Yadao Wang and Xiaokang Yang and Susanto Rahardja and Wei Shen},
  journal={arXiv preprint arXiv:2505.23094},
  year={2025}
}