No Task Left Behind: Isotropic Model Merging with Common and Task-Specific Subspaces

7 February 2025
Daniel Marczak
Simone Magistri
Sebastian Cygert
Bartłomiej Twardowski
Andrew D. Bagdanov
Joost van de Weijer
Communities: MoMe
Abstract

Model merging integrates the weights of multiple task-specific models into a single multi-task model. Despite recent interest in the problem, a significant performance gap between the combined and single-task models remains. In this paper, we investigate the key characteristics of task matrices -- weight update matrices applied to a pre-trained model -- that enable effective merging. We show that alignment between singular components of task-specific and merged matrices strongly correlates with performance improvement over the pre-trained model. Based on this, we propose an isotropic merging framework that flattens the singular value spectrum of task matrices, enhances alignment, and reduces the performance gap. Additionally, we incorporate both common and task-specific subspaces to further improve alignment and performance. Our proposed approach achieves state-of-the-art performance on vision and language tasks across various sets of tasks and model scales. This work advances the understanding of model merging dynamics, offering an effective methodology to merge models without requiring additional training. Code is available at this https URL.
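The core mechanism the abstract describes -- summing task matrices (weight updates relative to the pre-trained model) and flattening the singular value spectrum of the result -- can be sketched roughly as follows. This is a minimal illustration under assumptions, not the authors' implementation: the function name `isotropic_merge`, the choice of the mean singular value as the flattened spectrum, and the omission of the common/task-specific subspace construction and any merging coefficient are all placeholders; the actual method is specified in the paper and its released code.

```python
import numpy as np

def isotropic_merge(pretrained_W, task_Ws):
    """Hedged sketch of isotropic merging for a single weight matrix.

    pretrained_W: pre-trained weight matrix.
    task_Ws: list of fine-tuned weight matrices, one per task.
    """
    # Task matrices: weight updates applied to the pre-trained model.
    deltas = [W - pretrained_W for W in task_Ws]
    merged_delta = sum(deltas)

    # Flatten the singular value spectrum of the merged task matrix:
    # keep the singular directions, replace all singular values with
    # their mean so the spectrum becomes isotropic (an assumption for
    # illustration; the paper defines the exact scaling).
    U, S, Vt = np.linalg.svd(merged_delta, full_matrices=False)
    iso_delta = U @ np.diag(np.full_like(S, S.mean())) @ Vt

    return pretrained_W + iso_delta
```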

@article{marczak2025_2502.04959,
  title={No Task Left Behind: Isotropic Model Merging with Common and Task-Specific Subspaces},
  author={Daniel Marczak and Simone Magistri and Sebastian Cygert and Bartłomiej Twardowski and Andrew D. Bagdanov and Joost van de Weijer},
  journal={arXiv preprint arXiv:2502.04959},
  year={2025}
}