ResearchTrend.AI

MedMerge: Merging Models for Effective Transfer Learning to Medical Imaging Tasks

18 March 2024
Ibrahim Almakky
Santosh Sanjeev
Anees Ur Rehman Hashmi
Mohammad Areeb Qazi
Hu Wang
Mohammad Yaqub
Abstract

Transfer learning has become a powerful tool to initialize deep learning models to achieve faster convergence and higher performance. This is especially useful in the medical imaging analysis domain, where data scarcity limits possible performance gains for deep learning models. Some advancements have been made in boosting the transfer learning performance gain by merging models starting from the same initialization. However, in the medical imaging analysis domain, there is an opportunity to merge models starting from different initializations, thus combining the features learned from different tasks. In this work, we propose MedMerge, a method whereby the weights of different models can be merged, and their features can be effectively utilized to boost performance on a new task. With MedMerge, we learn kernel-level weights that can later be used to merge the models into a single model, even when starting from different initializations. Testing on various medical imaging analysis tasks, we show that our merged model can achieve significant performance gains, with up to 7% improvement on the F1 score. The code implementation of this work is available at this http URL.
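The kernel-level merging described in the abstract can be sketched as a per-kernel convex combination of two models' weights. This is a minimal illustration, not the authors' implementation: the function name, the dict-of-arrays weight representation, and the assumption that the learned coefficients form one mixing value per output-channel kernel are all hypothetical.

```python
import numpy as np

def merge_kernels(weights_a, weights_b, alphas):
    """Merge two models' weights kernel by kernel.

    weights_a / weights_b: dicts mapping layer name -> array whose first
    axis indexes kernels (output channels), e.g. conv weights of shape
    (out_channels, in_channels, k, k).
    alphas: dict mapping layer name -> array of shape (out_channels,)
    holding a learned mixing coefficient in [0, 1] per kernel.
    """
    merged = {}
    for name, wa in weights_a.items():
        wb = weights_b[name]
        # Reshape alphas so each coefficient broadcasts over one kernel.
        a = alphas[name].reshape(-1, *([1] * (wa.ndim - 1)))
        merged[name] = a * wa + (1.0 - a) * wb
    return merged

# Toy usage: two 2-kernel conv layers merged with per-kernel weights.
wa = {"conv1": np.ones((2, 3, 3, 3))}
wb = {"conv1": np.zeros((2, 3, 3, 3))}
alphas = {"conv1": np.array([1.0, 0.25])}
merged = merge_kernels(wa, wb, alphas)
```

In this sketch the coefficients would be learned on the target task (e.g. by gradient descent through the merged forward pass) before the two source models are collapsed into a single network.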

View on arXiv
@article{almakky2025_2403.11646,
  title={MedMerge: Merging Models for Effective Transfer Learning to Medical Imaging Tasks},
  author={Ibrahim Almakky and Santosh Sanjeev and Anees Ur Rehman Hashmi and Mohammad Areeb Qazi and Hu Wang and Mohammad Yaqub},
  journal={arXiv preprint arXiv:2403.11646},
  year={2025}
}