Optimizing MoE Routers: Design, Implementation, and Evaluation in Transformer Models

Mixture of Experts (MoE) architectures improve the scalability of large language models, yet their performance depends on the router module that directs tokens to specialized experts. Poor routing can cause load imbalance and reduced accuracy. This project designed and implemented different router architectures within Transformer models to address these limitations. We experimented with six distinct router variants: Linear, Attention, Multi-Layer Perceptron (MLP), Hybrid, Hash, and our new MLP-Hadamard. We characterized these routers on BERT and the Qwen1.5-MoE model, measuring parameter efficiency, inference latency, routing entropy, and expert utilization patterns. Our evaluations revealed distinct trade-offs: Linear routers offer speed, while MLP and Attention routers provide greater expressiveness. The MLP-Hadamard router shows a unique capability for structured, sparse routing. We successfully replaced and fine-tuned custom routers within the complex, quantized Qwen1.5-MoE model. This work provides a comparative analysis of MoE router designs and offers insights into optimizing their performance for efficient and effective large-scale model deployment.
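To make the simplest of these variants concrete, the sketch below shows a generic linear top-k router of the kind commonly used in MoE Transformers. It is not the paper's implementation; the class name `LinearRouter`, the dimensions, and the expert and top-k counts are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LinearRouter(nn.Module):
    """Generic top-k linear router: one linear layer scores each token
    against every expert, and the top-k highest-scoring experts are selected."""

    def __init__(self, hidden_dim: int, num_experts: int, top_k: int = 2):
        super().__init__()
        self.gate = nn.Linear(hidden_dim, num_experts, bias=False)
        self.top_k = top_k

    def forward(self, hidden_states: torch.Tensor):
        # hidden_states: (batch, seq_len, hidden_dim)
        logits = self.gate(hidden_states)              # (batch, seq_len, num_experts)
        probs = F.softmax(logits, dim=-1)
        topk_probs, topk_idx = probs.topk(self.top_k, dim=-1)
        # Renormalize so each token's selected expert weights sum to 1.
        topk_probs = topk_probs / topk_probs.sum(dim=-1, keepdim=True)
        # Raw logits are also returned, e.g. for auxiliary load-balancing
        # losses or for measuring routing entropy.
        return topk_idx, topk_probs, logits


# Illustrative usage: route a small batch of token representations.
router = LinearRouter(hidden_dim=768, num_experts=8, top_k=2)
x = torch.randn(2, 16, 768)                            # (batch, seq_len, hidden_dim)
expert_idx, expert_weights, gate_logits = router(x)
print(expert_idx.shape, expert_weights.shape)          # torch.Size([2, 16, 2]) twice
```

More expressive variants such as the MLP, Attention, or MLP-Hadamard routers studied in the paper replace the single linear gate with a deeper or structured scoring function while keeping the same top-k selection interface.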
@article{harvey2025_2506.16419,
  title   = {Optimizing MoE Routers: Design, Implementation, and Evaluation in Transformer Models},
  author  = {Daniel Fidel Harvey and George Weale and Berk Yilmaz},
  journal = {arXiv preprint arXiv:2506.16419},
  year    = {2025}
}