PM-MOE: Mixture of Experts on Private Model Parameters for Personalized Federated Learning

1 February 2025
Yu Feng
Yangli-ao Geng
Yifan Zhu
Zongfu Han
Xie Yu
Kaiwen Xue
Haoran Luo
Mengyang Sun
Guangwei Zhang
Meina Song
    FedML
    MoE
Abstract

Federated learning (FL) has gained widespread attention for its privacy-preserving and collaborative learning capabilities. Due to significant statistical heterogeneity, traditional FL struggles to generalize a shared model across diverse data domains. Personalized federated learning addresses this issue by dividing the model into a globally shared part and a locally private part, with the local model correcting representation biases introduced by the global model. Nevertheless, locally converged parameters capture domain-specific knowledge more accurately, and current methods overlook the potential benefits of these parameters. To address these limitations, we propose the PM-MoE architecture, which integrates a mixture of personalized modules with energy-based denoising of personalized modules, enabling each client to select beneficial personalized parameters from other clients. We apply PM-MoE to nine recent model-split-based personalized federated learning algorithms, achieving performance improvements with minimal additional training. Extensive experiments on six widely adopted datasets and two heterogeneity settings validate the effectiveness of our approach. The source code is available at \url{this https URL}.
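
The abstract describes PM-MoE as a gated mixture over personalized modules contributed by different clients. The snippet below is a minimal, hypothetical PyTorch sketch of that idea only: the class and variable names (PersonalizedMoE, peer_heads, gate) are illustrative assumptions, the peer heads are simply frozen linear layers, and the energy-based denoising step is omitted; it is not the authors' implementation.

```python
# Hypothetical sketch of a client-side mixture over personalized modules.
# Names and structure are assumptions based on the abstract, not the paper's code.
import torch
import torch.nn as nn


class PersonalizedMoE(nn.Module):
    """Mixes the local personalized head with frozen heads received from other clients."""

    def __init__(self, feature_dim: int, num_classes: int, peer_heads: list[nn.Module]):
        super().__init__()
        self.local_head = nn.Linear(feature_dim, num_classes)
        # Personalized modules from other clients are treated as frozen experts.
        self.peer_heads = nn.ModuleList(peer_heads)
        for head in self.peer_heads:
            for p in head.parameters():
                p.requires_grad_(False)
        # Lightweight gate producing one weight per expert: [local] + peers.
        self.gate = nn.Linear(feature_dim, 1 + len(peer_heads))

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.gate(features), dim=-1)                  # (B, 1+K)
        outputs = [self.local_head(features)] + [h(features) for h in self.peer_heads]
        stacked = torch.stack(outputs, dim=1)                                 # (B, 1+K, C)
        return (weights.unsqueeze(-1) * stacked).sum(dim=1)                   # (B, C)


if __name__ == "__main__":
    feature_dim, num_classes, num_peers = 32, 10, 3
    peers = [nn.Linear(feature_dim, num_classes) for _ in range(num_peers)]
    moe = PersonalizedMoE(feature_dim, num_classes, peers)
    logits = moe(torch.randn(8, feature_dim))
    print(logits.shape)  # torch.Size([8, 10])
```

In this sketch only the gate and the local head are trainable, which matches the abstract's claim that the architecture adds minimal additional training on top of the frozen personalized parameters collected from other clients.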

@article{feng2025_2502.00354,
  title={PM-MOE: Mixture of Experts on Private Model Parameters for Personalized Federated Learning},
  author={Yu Feng and Yangli-ao Geng and Yifan Zhu and Zongfu Han and Xie Yu and Kaiwen Xue and Haoran Luo and Mengyang Sun and Guangwei Zhang and Meina Song},
  journal={arXiv preprint arXiv:2502.00354},
  year={2025}
}