The power of fine-grained experts: Granularity boosts expressivity in Mixture of Experts

Abstract
Mixture-of-Experts (MoE) layers are increasingly central to frontier model architectures. By selectively activating parameters, they reduce computational cost while scaling total parameter count. This paper investigates the impact of the number of active experts, termed granularity, comparing architectures with many (e.g., 8 per layer in DeepSeek) to those with fewer (e.g., 1 per layer in Llama-4 models). We prove an exponential separation in network expressivity based on this design parameter, suggesting that models benefit from higher granularity. Experimental results corroborate our theoretical findings and illustrate this separation.
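The comparison in the abstract, many small active experts versus a single large one at a fixed active-parameter budget, can be made concrete with a short sketch. Below is a minimal NumPy illustration of a top-k MoE layer in which k is the granularity; all dimensions, parameter names, and the random-weight setup are illustrative assumptions for exposition, not the paper's construction or experimental setup.

```python
# Minimal sketch of a top-k Mixture-of-Experts layer (illustrative only).
# "Granularity" here is k, the number of experts activated per token.
import numpy as np

rng = np.random.default_rng(0)

def make_moe(d_model, n_experts, d_expert):
    """Random parameters: a linear router and n_experts two-layer ReLU experts."""
    return {
        "router": rng.standard_normal((d_model, n_experts)) / np.sqrt(d_model),
        "w_in":   rng.standard_normal((n_experts, d_model, d_expert)) / np.sqrt(d_model),
        "w_out":  rng.standard_normal((n_experts, d_expert, d_model)) / np.sqrt(d_expert),
    }

def moe_forward(params, x, k):
    """Route each token to its top-k experts and sum their outputs,
    weighted by softmax-normalized router scores."""
    scores = x @ params["router"]                   # (n_tokens, n_experts)
    topk = np.argsort(scores, axis=-1)[:, -k:]      # indices of the k highest-scoring experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        sel = topk[t]
        gate = np.exp(scores[t, sel] - scores[t, sel].max())
        gate /= gate.sum()
        for g, e in zip(gate, sel):
            h = np.maximum(x[t] @ params["w_in"][e], 0.0)   # ReLU expert MLP
            out[t] += g * (h @ params["w_out"][e])
    return out

# Two configurations with the same number of active parameters per token:
# coarse-grained: 1 active expert of width 512; fine-grained: 8 active experts of width 64.
x = rng.standard_normal((4, 256))
coarse = make_moe(d_model=256, n_experts=16, d_expert=512)
fine   = make_moe(d_model=256, n_experts=16, d_expert=64)
y_coarse = moe_forward(coarse, x, k=1)
y_fine   = moe_forward(fine, x, k=8)
print(y_coarse.shape, y_fine.shape)
```

Both configurations spend the same active compute per token; the fine-grained one composes eight smaller experts instead of selecting one large expert, which is the design axis along which the paper proves its expressivity separation.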
Citation:
@article{boix-adsera2025_2505.06839,
  title   = {The power of fine-grained experts: Granularity boosts expressivity in Mixture of Experts},
  author  = {Enric Boix-Adsera and Philippe Rigollet},
  journal = {arXiv preprint arXiv:2505.06839},
  year    = {2025}
}