Subspace-Boosted Model Merging

Model merging enables the combination of multiple specialized expert models into a single model capable of performing multiple tasks. However, merging an increasing number of specialized experts generally yields diminishing returns and can reduce overall performance. In this work, we offer an explanation and analysis from a task arithmetic perspective, revealing that as the merging process continues for more and more experts (across numerous existing merging methods), the associated task vector space experiences rank collapse. To mitigate this issue, we introduce Subspace Boosting, which operates on the singular value decomposition of the task vector space and maintains task vector ranks. Subspace Boosting raises merging efficacy for up to 20 expert models by large margins of more than 10% when evaluated on vision benchmarks. Moreover, we propose employing Higher-Order Generalized Singular Value Decomposition to further quantify task similarity, offering a new, interpretable perspective on model merging.
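As a rough illustration of the idea, the sketch below forms task vectors (expert weights minus base weights), applies a rank-preserving boost to their singular-value spectra, and merges them via task arithmetic. The abstract does not specify the actual boosting rule; the spectrum-flattening step, the `alpha` and `scale` parameters, and all function names here are illustrative assumptions, not the authors' method.

```python
# Minimal sketch of the subspace-boosting idea, assuming a simple
# spectrum-flattening rule; the paper's actual boosting rule may differ.
import torch

def task_vector(expert_weight: torch.Tensor, base_weight: torch.Tensor) -> torch.Tensor:
    """Task vector = fine-tuned expert weights minus pretrained base weights."""
    return expert_weight - base_weight

def boost_subspace(tv: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    """Rescale the singular-value spectrum toward uniformity so the task
    vector keeps a high effective rank (hypothetical boosting rule)."""
    U, S, Vh = torch.linalg.svd(tv, full_matrices=False)
    S_boosted = (1 - alpha) * S + alpha * S.mean()  # flatten the spectrum
    return U @ torch.diag(S_boosted) @ Vh

def merge(base: torch.Tensor, experts: list[torch.Tensor], scale: float = 0.3) -> torch.Tensor:
    """Task-arithmetic merge of boosted task vectors for one 2-D weight matrix."""
    merged_tv = sum(boost_subspace(task_vector(w, base)) for w in experts)
    return base + scale * merged_tv
```

In a real model this would be applied per weight matrix across all layers; the point of the sketch is only that boosting acts on each task vector's SVD before summation, countering the rank collapse described above.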
@article{skorobogat2025_2506.16506,
  title={Subspace-Boosted Model Merging},
  author={Ronald Skorobogat and Karsten Roth and Mariana-Iuliana Georgescu and Zeynep Akata},
  journal={arXiv preprint arXiv:2506.16506},
  year={2025}
}