MoRE: A Mixture of Low-Rank Experts for Adaptive Multi-Task Learning

28 May 2025
Dacao Zhang, Kun Zhang, Shimao Chu, Le Wu, Xin Li, Si Wei
Topics: MoE, ALM, OffRL
Main: 8 pages · 7 figures · 7 tables · Bibliography: 3 pages · Appendix: 3 pages
Abstract

With the rapid development of Large Language Models (LLMs), Parameter-Efficient Fine-Tuning (PEFT) methods have gained significant attention; they aim to fine-tune LLMs efficiently with far fewer trainable parameters. As a representative PEFT method, Low-Rank Adaptation (LoRA) introduces low-rank matrices to approximate the incremental tuning parameters and achieves impressive performance across multiple scenarios. Since then, numerous variants have been proposed to improve it further. However, these methods either focus on single-task scenarios or train a separate LoRA module for each task, limiting the efficiency and effectiveness of LoRA in multi-task settings. To better adapt to multi-task fine-tuning, in this paper we propose a novel Mixture of Low-Rank Experts (MoRE) for multi-task PEFT. Specifically, instead of using an individual LoRA for each task, we align different ranks of a LoRA module with different tasks, which we name low-rank experts. Moreover, we design a novel adaptive rank selector that chooses the appropriate expert for each task. By jointly training the low-rank experts, MoRE enhances the adaptability and efficiency of LoRA in multi-task scenarios. Finally, we conduct extensive experiments on multiple multi-task benchmarks with different LLMs to verify model performance. Experimental results demonstrate that, compared to traditional LoRA and its variants, MoRE significantly improves the performance of LLMs in multi-task scenarios while incurring no additional inference cost. We also release the model and code to facilitate the community.
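The abstract outlines the core idea: a shared LoRA module whose different rank slices act as task-specific low-rank experts, routed by an adaptive rank selector. The sketch below illustrates one possible reading of that design in PyTorch; the class name MoRELinear, the Gumbel-softmax routing, the pooled-input task signal, and all hyperparameters are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch (not the authors' code): one shared LoRA factor pair (A, B)
# with maximum rank r_max; each "expert" is a different rank slice of the same
# matrices, and a lightweight selector picks a rank per example/task.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MoRELinear(nn.Module):
    def __init__(self, base: nn.Linear, r_max: int = 16, n_experts: int = 4, alpha: float = 16.0):
        super().__init__()
        self.base = base                          # frozen pretrained linear layer
        self.base.weight.requires_grad_(False)
        in_f, out_f = base.in_features, base.out_features
        # shared low-rank factors; experts are nested rank slices of A and B
        self.A = nn.Parameter(torch.randn(r_max, in_f) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_f, r_max))
        # candidate ranks, e.g. [4, 8, 12, 16] for r_max=16, n_experts=4
        self.ranks = [r_max * (i + 1) // n_experts for i in range(n_experts)]
        # adaptive rank selector: maps a pooled input representation to expert logits
        self.selector = nn.Linear(in_f, n_experts)
        self.scaling = alpha / r_max

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, in_features)
        logits = self.selector(x.mean(dim=1))                     # (batch, n_experts)
        probs = F.gumbel_softmax(logits, hard=True)               # one-hot, straight-through
        out = self.base(x)
        for i, r in enumerate(self.ranks):
            delta = (x @ self.A[:r].T) @ self.B[:, :r].T          # rank-r LoRA update
            out = out + self.scaling * probs[:, i].view(-1, 1, 1) * delta
        return out
```

In this sketch, training keeps the rank selection differentiable via the straight-through Gumbel-softmax; at inference time the selection can be fixed per task and only the chosen rank slice materialized, which is consistent with the paper's claim of no additional inference cost.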

View on arXiv: https://arxiv.org/abs/2505.22694
@article{zhang2025_2505.22694,
  title={MoRE: A Mixture of Low-Rank Experts for Adaptive Multi-Task Learning},
  author={Dacao Zhang and Kun Zhang and Shimao Chu and Le Wu and Xin Li and Si Wei},
  journal={arXiv preprint arXiv:2505.22694},
  year={2025}
}