Understanding Model Merging: A Unified Generalization Framework for Heterogeneous Experts

Qinglun Li
Anke Tang
Miao Zhang
Mengzhu Wang
Quanjun Yin
Li Shen
Main: 8 pages · 8 figures · 2 tables · Appendix: 24 pages
Abstract

Model merging efficiently aggregates the capabilities of multiple fine-tuned models into a single one, operating purely in parameter space without the original data or expensive re-computation. Despite empirical successes, a unified theory of its effectiveness under heterogeneous fine-tuning hyperparameters (e.g., varying learning rates and batch sizes) remains missing. Moreover, the lack of hyperparameter transparency in open-source fine-tuned models makes it difficult to predict merged-model performance, leaving practitioners without guidance on how to fine-tune merge-friendly experts. To address these two challenges, we employ $L_2$-stability theory under heterogeneous hyperparameter environments to analyze the generalization of the merged model $\boldsymbol{x}_{avg}$. This pioneering analysis yields two key contributions: (i) a unified theoretical framework that explains existing merging algorithms, revealing how each optimizes specific terms in our bound and thus providing a strong theoretical foundation for empirical observations; and (ii) actionable recommendations that help practitioners strategically fine-tune expert models, enabling the construction of merge-friendly models within the pretraining-to-finetuning pipeline. Extensive experiments on the ResNet and ViT families across 20 and 8 visual classification tasks, respectively, involving thousands of fine-tuned models, robustly confirm the impact of different hyperparameters on the generalization of $\boldsymbol{x}_{avg}$, as predicted by our theoretical results.
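To make the setting concrete, below is a minimal sketch of parameter-space merging by simple weight averaging, which produces a merged model of the form $\boldsymbol{x}_{avg}$ discussed in the abstract. The function name `merge_models` and its signature are illustrative, not the authors' code, and the sketch assumes all experts were fine-tuned from the same pretrained initialization and share an architecture.

```python
import torch

def merge_models(state_dicts, weights=None):
    """Average the parameters of several fine-tuned checkpoints.

    A minimal sketch of parameter-space merging (uniform or weighted
    averaging). Assumes every state dict comes from the same
    architecture, fine-tuned from a common pretrained initialization.
    """
    if weights is None:
        # Uniform average: x_avg = (1/n) * sum_i x_i
        weights = [1.0 / len(state_dicts)] * len(state_dicts)
    merged = {}
    for key in state_dicts[0]:
        merged[key] = sum(
            w * sd[key].float() for w, sd in zip(weights, state_dicts)
        )
    return merged

# Hypothetical usage: merge three fine-tuned experts into one model.
# experts = [torch.load(p) for p in ("a.pt", "b.pt", "c.pt")]
# model.load_state_dict(merge_models(experts))
```

No data or gradient computation is involved; the merge is a single pass over the parameter tensors, which is what makes the approach cheap relative to retraining.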
