Enhancing Multi-modal Models with Heterogeneous MoE Adapters for Fine-tuning

26 March 2025
Sashuai Zhou
Hai Huang
Yan Xia
    MoMe
    MoE
Abstract

Multi-modal models excel in cross-modal tasks but are computationally expensive due to their billions of parameters. Parameter-efficient fine-tuning (PEFT) offers a solution by adding small trainable components while freezing the pre-trained parameters. However, existing methods primarily focus on uni-modal processing, overlooking the modal fusion that is critical for multi-modal tasks. To fill this gap, we propose heterogeneous mixture-of-experts adapters that extend the traditional PEFT framework to support multi-modal expert combinations and improve information interaction. Additionally, our approach modifies the affine linear expert design to enable efficient modal fusion in a low-rank space, achieving competitive performance with only 5-8% of the parameters fine-tuned. Experiments across eight downstream tasks, including visual-audio and text-visual benchmarks, demonstrate the superior performance of our approach.
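To make the adapter idea concrete, below is a minimal PyTorch sketch, not the authors' implementation: it assumes a frozen multi-modal backbone and shows a mixture-of-experts adapter whose affine linear experts operate in a low-rank space, with a router that sees features from both modalities so the experts can perform modal fusion rather than purely uni-modal processing. All module, parameter, and dimension names (HeteroMoEAdapter, LowRankAffineExpert, rank, n_experts) are hypothetical.

```python
# Illustrative sketch only: a mixture-of-experts adapter with low-rank
# affine experts for modal fusion. Names and shapes are assumptions, not
# the paper's released code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LowRankAffineExpert(nn.Module):
    """Affine linear expert in a low-rank space: down-project, apply an
    affine transform, then up-project back to the model width."""

    def __init__(self, dim: int, rank: int):
        super().__init__()
        self.down = nn.Linear(dim, rank)
        self.affine = nn.Linear(rank, rank)  # affine map in the low-rank space
        self.up = nn.Linear(rank, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.up(self.affine(self.down(x)))


class HeteroMoEAdapter(nn.Module):
    """Adapter that mixes a pool of low-rank experts; the router conditions
    on features from both modalities so routing reflects cross-modal context."""

    def __init__(self, dim: int, rank: int = 16, n_experts: int = 4):
        super().__init__()
        self.experts = nn.ModuleList(
            [LowRankAffineExpert(dim, rank) for _ in range(n_experts)]
        )
        self.router = nn.Linear(2 * dim, n_experts)  # routes on fused features

    def forward(self, x: torch.Tensor, other: torch.Tensor) -> torch.Tensor:
        # x:     (batch, tokens, dim) features of the current modality
        # other: (batch, tokens, dim) features of the other modality
        ctx = other.mean(dim=1, keepdim=True).expand_as(x)      # pooled cross-modal context
        gates = F.softmax(self.router(torch.cat([x, ctx], dim=-1)), dim=-1)  # (B, T, E)
        expert_out = torch.stack([e(x) for e in self.experts], dim=-1)       # (B, T, D, E)
        mixed = (expert_out * gates.unsqueeze(2)).sum(dim=-1)                # (B, T, D)
        return x + mixed  # residual connection; backbone weights stay frozen
```

In such a setup only the adapter and router parameters would be trained, which is what keeps the trainable fraction small (the paper reports fine-tuning only 5-8% of the parameters).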

@article{zhou2025_2503.20633,
  title={Enhancing Multi-modal Models with Heterogeneous MoE Adapters for Fine-tuning},
  author={Sashuai Zhou and Hai Huang and Yan Xia},
  journal={arXiv preprint arXiv:2503.20633},
  year={2025}
}