Decouple and Orthogonalize: A Data-Free Framework for LoRA Merging

Abstract

With more open-source models available for diverse tasks, model merging has gained attention as a way to combine multiple models into one, reducing training, storage, and inference costs. Current research mainly focuses on merging fully fine-tuned models, overlooking the popular LoRA. However, our empirical analysis reveals that: a) existing merging methods designed for full fine-tuning perform poorly on LoRA; b) LoRA modules show much larger parameter magnitude variance than fully fine-tuned weights; c) greater parameter magnitude variance correlates with worse merging performance. Since large magnitude variances distort the distribution of the merged parameters, causing information loss and performance degradation, we propose a Decoupled and Orthogonal merging approach (DO-Merging). By separating parameters into magnitude and direction components and merging them independently, we reduce the impact of magnitude differences on the directional alignment of the merged models, thereby preserving task information. Furthermore, we introduce a data-free, layer-wise gradient descent method with orthogonal constraints to mitigate interference when merging the direction components. We provide theoretical guarantees for both the decoupling and orthogonal components, and we validate through extensive experiments across vision, language, and multi-modal domains that DO-Merging achieves significantly higher performance than existing merging methods at minimal cost. Notably, each component can be flexibly integrated with existing methods, offering near free-lunch improvements across tasks.
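
To make the pipeline concrete, below is a minimal PyTorch sketch of the two ideas in the abstract: each LoRA delta is split column-wise into magnitude and direction (a DoRA-style choice), magnitudes are merged by simple averaging, and the merged direction is refined by data-free gradient descent with an orthogonality-style penalty on per-task residuals. The decomposition, the loss, and all names (decompose, merge_layer, lam) are illustrative assumptions, not the paper's exact formulation.

import torch


def decompose(delta):
    # Column-wise magnitude/direction split (a DoRA-style choice; an
    # assumption, the paper may decompose differently).
    mag = delta.norm(dim=0, keepdim=True)      # per-column norms
    direction = delta / (mag + 1e-8)           # unit-norm columns
    return mag, direction


def merge_layer(deltas, steps=100, lr=1e-2, lam=0.1):
    # Merge per-task LoRA deltas (each delta = B @ A) for one layer.
    mags, dirs = zip(*(decompose(d) for d in deltas))
    merged_mag = torch.stack(mags).mean(dim=0)   # merge magnitudes separately

    # Data-free gradient descent on the merged direction; the penalty
    # pushes per-task residuals toward mutual orthogonality to reduce
    # interference (one plausible instantiation of the orthogonal
    # constraint, not the paper's exact loss).
    D = torch.stack(dirs).mean(dim=0).clone().requires_grad_(True)
    opt = torch.optim.SGD([D], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        residuals = [D - Di for Di in dirs]
        recon = sum(r.pow(2).sum() for r in residuals)
        ortho = sum(
            (residuals[i].T @ residuals[j]).pow(2).sum()
            for i in range(len(residuals))
            for j in range(i + 1, len(residuals))
        )
        (recon + lam * ortho).backward()
        opt.step()
    return (merged_mag * D).detach()             # recombine magnitude and direction


Applied layer-wise, e.g. merged = {name: merge_layer([lora_a[name], lora_b[name]]) for name in lora_a}, this requires no task data, consistent with the data-free setting described above.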

@article{zheng2025_2505.15875,
  title={Decouple and Orthogonalize: A Data-Free Framework for LoRA Merging},
  author={Shenghe Zheng and Hongzhi Wang and Chenyu Huang and Xiaohui Wang and Tao Chen and Jiayuan Fan and Shuyue Hu and Peng Ye},
  journal={arXiv preprint arXiv:2505.15875},
  year={2025}
}