FedMoE: Personalized Federated Learning via Heterogeneous Mixture of Experts

21 August 2024 · arXiv: 2408.11304

Hanzi Mei, Dongqi Cai, Ao Zhou, Shangguang Wang, Mengwei Xu

Topics: MoE

Papers citing "FedMoE: Personalized Federated Learning via Heterogeneous Mixture of Experts"

5 / 5 papers shown
FedADP: Unified Model Aggregation for Federated Learning with Heterogeneous Model Architectures
Jiacheng Wang, Hongtao Lv, Lei Liu
FedML · 25 · 0 · 0 · 10 May 2025

MoQa: Rethinking MoE Quantization with Multi-stage Data-model Distribution Awareness
Zihao Zheng, Xiuping Cui, Size Zheng, Maoliang Li, Jiayu Chen, Yun Liang, Xiang Chen
MQ, MoE · 69 · 0 · 0 · 27 Mar 2025

Personalized Federated Fine-Tuning for LLMs via Data-Driven Heterogeneous Model Architectures
Yicheng Zhang, Zhen Qin, Zhaomin Wu, Jian Hou, Shuiguang Deng
85 · 2 · 0 · 28 Nov 2024

Scalable and Efficient MoE Training for Multitask Multilingual Models
Young Jin Kim, A. A. Awan, Alexandre Muzio, Andres Felipe Cruz Salinas, Liyang Lu, Amr Hendy, Samyam Rajbhandari, Yuxiong He, Hany Awadalla
MoE · 104 · 84 · 0 · 22 Sep 2021

Scaling Laws for Neural Language Models
Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei
264 · 4,532 · 0 · 23 Jan 2020