MoA: Heterogeneous Mixture of Adapters for Parameter-Efficient Fine-Tuning of Large Language Models

6 June 2025
Jie Cao
Tianwei Lin
Hongyang He
Rolan Yan
Wenqiao Zhang
Juncheng Billy Li
Dongping Zhang
Siliang Tang
Yueting Zhuang
    MoE
Main: 8 pages · Appendix: 5 pages · Bibliography: 3 pages · 9 figures · 5 tables
Abstract

Recent studies integrate Low-Rank Adaptation (LoRA) and Mixture-of-Experts (MoE) to further enhance the performance of parameter-efficient fine-tuning (PEFT) methods in Large Language Model (LLM) applications. Existing methods employ homogeneous MoE-LoRA architectures composed of LoRA experts with similar or identical structures and capacities. However, these approaches often suffer from representation collapse and expert load imbalance, which limit the potential of LLMs. To address these challenges, we propose a heterogeneous Mixture-of-Adapters (MoA) approach. This method dynamically integrates PEFT adapter experts with diverse structures, leveraging their complementary representational capabilities to foster expert specialization and thereby enhance the transfer of pre-trained knowledge to downstream tasks. MoA supports two variants: (i) Soft MoA achieves fine-grained integration by performing a weighted fusion of all expert outputs; (ii) Sparse MoA activates adapter experts sparsely based on their contribution, with negligible performance degradation. Experimental results demonstrate that heterogeneous MoA outperforms homogeneous MoE-LoRA methods in both performance and parameter efficiency. Our project is available at this https URL.
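
To make the soft vs. sparse routing distinction concrete, below is a minimal PyTorch sketch of a mixture layer over structurally different adapter experts (a low-rank LoRA adapter and a bottleneck adapter). The class names, ranks, and routing details are illustrative assumptions based only on the abstract, not the authors' implementation.

```python
# Minimal sketch of a heterogeneous Mixture-of-Adapters layer.
# Illustrative assumptions only; module names, ranks, and routing
# details are not taken from the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LoRAAdapter(nn.Module):
    """Low-rank adapter: delta(x) = B(A(x)) with rank r."""
    def __init__(self, d_model, rank=8):
        super().__init__()
        self.down = nn.Linear(d_model, rank, bias=False)
        self.up = nn.Linear(rank, d_model, bias=False)
        nn.init.zeros_(self.up.weight)  # start as a zero update

    def forward(self, x):
        return self.up(self.down(x))


class BottleneckAdapter(nn.Module):
    """Bottleneck adapter with a nonlinearity (a different expert structure)."""
    def __init__(self, d_model, bottleneck=32):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x):
        return self.up(F.gelu(self.down(x)))


class MixtureOfAdapters(nn.Module):
    """Routes tokens over heterogeneous adapter experts.

    mode='soft'  : weighted fusion of all expert outputs.
    mode='sparse': keep only the top-k experts per token.
    """
    def __init__(self, d_model, mode="soft", top_k=1):
        super().__init__()
        self.experts = nn.ModuleList([
            LoRAAdapter(d_model, rank=8),
            LoRAAdapter(d_model, rank=16),
            BottleneckAdapter(d_model, bottleneck=32),
        ])
        self.router = nn.Linear(d_model, len(self.experts))
        self.mode = mode
        self.top_k = top_k

    def forward(self, x):                                    # x: (B, S, D)
        gate = F.softmax(self.router(x), dim=-1)             # (B, S, E)
        if self.mode == "sparse":
            _, top_idx = gate.topk(self.top_k, dim=-1)
            mask = torch.zeros_like(gate).scatter_(-1, top_idx, 1.0)
            gate = gate * mask
            gate = gate / gate.sum(dim=-1, keepdim=True).clamp_min(1e-9)
        expert_out = torch.stack([e(x) for e in self.experts], dim=-1)  # (B, S, D, E)
        return torch.einsum("bsde,bse->bsd", expert_out, gate)


# Usage: the returned delta would be added to a frozen layer's output.
layer = MixtureOfAdapters(d_model=768, mode="sparse", top_k=1)
delta = layer(torch.randn(2, 10, 768))                       # (2, 10, 768)
```

In this sketch the sparse path renormalizes the surviving gate weights so routing remains a convex combination; any auxiliary load-balancing loss, which the abstract's mention of expert load imbalance suggests the method also addresses, is omitted for brevity.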

@article{cao2025_2506.05928,
  title={MoA: Heterogeneous Mixture of Adapters for Parameter-Efficient Fine-Tuning of Large Language Models},
  author={Jie Cao and Tianwei Lin and Hongyang He and Rolan Yan and Wenqiao Zhang and Juncheng Li and Dongping Zhang and Siliang Tang and Yueting Zhuang},
  journal={arXiv preprint arXiv:2506.05928},
  year={2025}
}