
μ-MoE: Test-Time Pruning as Micro-Grained Mixture-of-Experts

Main: 4 pages, 4 figures, 5 tables; Bibliography: 3 pages; Appendix: 3 pages
Abstract

To tackle the huge computational demand of large foundation models, activation-aware compression techniques that require no retraining have been introduced. However, since these rely on calibration data, domain shift may arise for unknown downstream tasks. With a computationally efficient calibration step, activation-aware pruning can be executed adaptively for every prompt, while still reducing complexity at inference. We formulate this as a mixture of micro-experts, called μ-MoE. Several experiments demonstrate that μ-MoE can dynamically adapt to task- and prompt-dependent structured sparsity on the fly.
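To make the idea of prompt-adaptive, activation-aware pruning concrete, below is a minimal sketch for a single linear layer. It assumes a Wanda-style |weight| × ||activation|| importance score and a fixed keep ratio; the paper's actual μ-MoE scoring and micro-expert routing may differ, and all names (prune_linear_for_prompt, keep_ratio) are illustrative.

```python
# Hypothetical sketch: per-prompt activation-aware structured (channel) pruning
# of one linear layer. Not the paper's exact algorithm.

import torch


def prune_linear_for_prompt(weight: torch.Tensor,
                            prompt_acts: torch.Tensor,
                            keep_ratio: float = 0.5) -> torch.Tensor:
    """Return a column-pruned copy of `weight` adapted to the current prompt.

    weight:      (out_features, in_features) dense weight matrix.
    prompt_acts: (tokens, in_features) activations gathered from the prompt,
                 acting as cheap per-prompt calibration data.
    keep_ratio:  fraction of input channels (columns) to keep.
    """
    # Per-input-channel activation norm over the prompt tokens.
    act_norm = prompt_acts.norm(dim=0)                   # (in_features,)

    # Activation-aware importance of each input channel (column).
    col_score = (weight.abs() * act_norm).sum(dim=0)     # (in_features,)

    # Keep the top-k columns -> structured sparsity chosen on the fly.
    k = max(1, int(keep_ratio * weight.shape[1]))
    keep_idx = torch.topk(col_score, k).indices

    mask = torch.zeros(weight.shape[1], dtype=weight.dtype)
    mask[keep_idx] = 1.0
    return weight * mask                                 # pruned "micro-expert"


if __name__ == "__main__":
    torch.manual_seed(0)
    W = torch.randn(16, 64)     # toy layer
    X = torch.randn(8, 64)      # toy prompt activations
    W_pruned = prune_linear_for_prompt(W, X, keep_ratio=0.25)
    print("nonzero columns:", int((W_pruned.abs().sum(dim=0) > 0).sum()))
```

Because the scoring step is a single matrix-free reduction over the prompt's activations, recomputing the mask for every prompt adds little overhead relative to the savings from the pruned forward pass.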

@article{koike-akino2025_2505.18451,
  title={$\mu$-MoE: Test-Time Pruning as Micro-Grained Mixture-of-Experts},
  author={Toshiaki Koike-Akino and Jing Liu and Ye Wang},
  journal={arXiv preprint arXiv:2505.18451},
  year={2025}
}