Shortcut-connected Expert Parallelism for Accelerating Mixture-of-Experts

Expert parallelism has emerged as a key strategy for distributing the computational workload of sparsely-gated mixture-of-experts (MoE) models across multiple devices, enabling the processing of increasingly large-scale models. However, the All-to-All communication inherent to expert parallelism poses a significant bottleneck, limiting the efficiency of MoE models. Although existing optimization methods partially mitigate this issue, they remain constrained by the sequential dependency between communication and computation operations. To address this challenge, we propose ScMoE, a novel shortcut-connected MoE architecture integrated with an overlapping parallelization strategy. ScMoE decouples communication from its conventional sequential ordering, enabling up to 100% overlap with computation. Compared to the prevalent top-2 MoE baseline, ScMoE achieves speedups of 1.49 times in training and 1.82 times in inference. Moreover, our experiments and analyses indicate that ScMoE not only matches the model quality of existing approaches but in some instances surpasses it.
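
To make the overlapping idea concrete, the sketch below is an illustrative PyTorch example (not the authors' implementation) of launching an asynchronous All-to-All expert dispatch while an independent dense computation proceeds, so that the communication latency is hidden behind useful work. The names overlapped_moe_step, dense_block, and expert are hypothetical, and the sketch assumes torch.distributed has already been initialized with a GPU backend and that each rank exchanges equal-sized token slices.

# Illustrative sketch only: overlap an async All-to-All with dense compute.
# Assumes torch.distributed is initialized and token counts match across ranks.
import torch
import torch.distributed as dist

def overlapped_moe_step(shortcut_tokens: torch.Tensor,
                        current_tokens: torch.Tensor,
                        dense_block: torch.nn.Module,
                        expert: torch.nn.Module) -> torch.Tensor:
    """Dispatch shortcut_tokens to remote experts while the current
    layer's dense computation runs, then combine the two branches."""
    dispatched = torch.empty_like(shortcut_tokens)
    # Launch the dispatch All-to-All without blocking (async_op=True).
    dispatch_work = dist.all_to_all_single(dispatched, shortcut_tokens,
                                           async_op=True)

    # Dense computation proceeds concurrently, hiding communication latency.
    dense_out = dense_block(current_tokens)

    # Wait for the dispatch, run the local expert, then send results back
    # with a second (combine) All-to-All.
    dispatch_work.wait()
    expert_out = expert(dispatched)
    combined = torch.empty_like(expert_out)
    combine_work = dist.all_to_all_single(combined, expert_out, async_op=True)
    combine_work.wait()

    # Shortcut-style merge of the dense branch and the expert branch.
    return dense_out + combined

This only demonstrates the general pattern of decoupling the All-to-All from the computation it would otherwise block; the precise placement of the shortcut connection and the merge in ScMoE follows the paper's architecture rather than this simplified example.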
@article{cai2025_2404.05019,
  title   = {Shortcut-connected Expert Parallelism for Accelerating Mixture-of-Experts},
  author  = {Weilin Cai and Juyong Jiang and Le Qin and Junwei Cui and Sunghun Kim and Jiayi Huang},
  journal = {arXiv preprint arXiv:2404.05019},
  year    = {2025}
}