
Mixture-of-Experts under Finite-Rate Gating: Communication–Generalization Trade-offs

Ali Khalesi
Mohammad Reza Deylam Salehi
Main: 4 pages
3 figures
Bibliography: 1 page
Appendix: 1 page
Abstract

Mixture-of-Experts (MoE) architectures decompose prediction tasks into specialized expert sub-networks selected by a gating mechanism. This letter adopts a communication-theoretic view of MoE gating, modeling the gate as a stochastic channel operating under a finite information rate. Within an information-theoretic learning framework, we specialize a mutual-information generalization bound and develop a rate-distortion characterization $D(R_g)$ of finite-rate gating, where $R_g := I(X; T)$, yielding (under a standard empirical rate-distortion optimality condition) $\mathbb{E}[R(W)] \le D(R_g) + \delta_m + \sqrt{(2/m)\, I(S; W)}$. The analysis yields capacity-aware limits for communication-constrained MoE systems, and numerical simulations on synthetic multi-expert models empirically confirm the predicted trade-offs between gating rate, expressivity, and generalization.
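The stated bound combines three additive terms: the distortion achievable at gating rate $R_g$, a slack term $\delta_m$, and the standard mutual-information generalization penalty $\sqrt{(2/m)\, I(S; W)}$. As a minimal numerical sketch, the bound can be evaluated for hypothetical placeholder values (none of the numbers below come from the paper):

```python
import math

def generalization_bound(D_Rg: float, delta_m: float, m: int, I_SW: float) -> float:
    """Evaluate the abstract's upper bound on expected risk:
    E[R(W)] <= D(R_g) + delta_m + sqrt((2/m) * I(S; W)).

    D_Rg    -- distortion at gating rate R_g (hypothetical value)
    delta_m -- empirical rate-distortion slack term (hypothetical value)
    m       -- number of training samples
    I_SW    -- mutual information I(S; W) between sample and weights, in nats
    """
    return D_Rg + delta_m + math.sqrt((2.0 / m) * I_SW)

# Illustrative values only: with m = 1000 samples and I(S; W) = 5 nats,
# the mutual-information penalty is sqrt(0.01) = 0.1.
bound = generalization_bound(D_Rg=0.10, delta_m=0.02, m=1000, I_SW=5.0)
print(bound)  # -> 0.22
```

Note how the penalty term shrinks as $O(1/\sqrt{m})$, so for large sample sizes the bound is dominated by the gating-rate distortion $D(R_g)$, which is exactly the communication-generalization trade-off the letter analyzes.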
