CoMoE: Contrastive Representation for Mixture-of-Experts in Parameter-Efficient Fine-tuning

23 May 2025
Jinyuan Feng
Chaopeng Wei
Tenghai Qiu
Tianyi Hu
Zhiqiang Pu
Topics: MoE
Abstract

In parameter-efficient fine-tuning, mixture-of-experts (MoE), which specializes functionalities into different experts and sparsely activates them as appropriate, has been widely adopted as a promising approach to trade off between model capacity and computation overhead. However, current MoE variants fall short on heterogeneous datasets: they ignore the fact that experts may learn similar knowledge, which leaves MoE's capacity underutilized. In this paper, we propose Contrastive Representation for MoE (CoMoE), a novel method to promote modularization and specialization in MoE, where the experts are trained alongside a contrastive objective constructed by sampling from the activated and inactivated experts in top-k routing. We demonstrate that this contrastive objective recovers the mutual-information gap between inputs and the two types of experts. Experiments on several benchmarks and in multi-task settings demonstrate that CoMoE consistently enhances MoE's capacity and promotes modularization among the experts.
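To make the idea concrete, the sketch below shows one plausible reading of the abstract: a top-k MoE layer whose experts are trained with an InfoNCE-style term that pulls an input toward the representations of its activated experts and pushes it away from the inactivated ones. This is a minimal illustration only, not the authors' implementation; the LoRA-like expert shape, the cosine-similarity scoring, the temperature, and all dimensions are assumptions for the example.

```python
# Illustrative sketch (not the paper's code): top-k MoE with a contrastive
# term over activated vs. inactivated experts. All hyperparameters and the
# expert architecture are assumed for the example.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ContrastiveTopKMoE(nn.Module):
    def __init__(self, d_model=64, d_expert=16, n_experts=8, top_k=2, temperature=0.1):
        super().__init__()
        self.top_k = top_k
        self.temperature = temperature
        self.router = nn.Linear(d_model, n_experts)
        # Lightweight (LoRA-like) experts: down-project, nonlinearity, up-project.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_expert), nn.ReLU(), nn.Linear(d_expert, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):
        # x: (batch, d_model)
        gates = F.softmax(self.router(x), dim=-1)                 # (B, E)
        topk_val, topk_idx = gates.topk(self.top_k, dim=-1)       # activated experts

        # Dense expert evaluation for clarity; a real MoE dispatches sparsely.
        expert_out = torch.stack([e(x) for e in self.experts], dim=1)  # (B, E, D)

        # Sparse mixture over the activated (top-k) experts only.
        mask = torch.zeros_like(gates).scatter(-1, topk_idx, topk_val)
        mask = mask / mask.sum(dim=-1, keepdim=True)
        y = torch.einsum("be,bed->bd", mask, expert_out)

        # InfoNCE-style contrastive term: activated experts act as positives,
        # inactivated experts as negatives, for each input.
        sim = F.cosine_similarity(x.unsqueeze(1), expert_out, dim=-1) / self.temperature  # (B, E)
        activated = torch.zeros_like(gates).scatter(-1, topk_idx, 1.0)
        log_prob = sim.log_softmax(dim=-1)
        contrastive_loss = -(activated * log_prob).sum(-1).mean() / self.top_k
        return y, contrastive_loss


if __name__ == "__main__":
    moe = ContrastiveTopKMoE()
    x = torch.randn(4, 64)
    y, loss = moe(x)
    print(y.shape, float(loss))
```

In training, the contrastive term would be added to the task loss with a weighting coefficient, so that experts are encouraged to learn distinct, input-dependent representations rather than redundant ones.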

View on arXiv
@article{feng2025_2505.17553,
  title={CoMoE: Contrastive Representation for Mixture-of-Experts in Parameter-Efficient Fine-tuning},
  author={Jinyuan Feng and Chaopeng Wei and Tenghai Qiu and Tianyi Hu and Zhiqiang Pu},
  journal={arXiv preprint arXiv:2505.17553},
  year={2025}
}