MiLo: Efficient Quantized MoE Inference with Mixture of Low-Rank Compensators

3 April 2025
Beichen Huang
Yueming Yuan
Zelei Shao
Minjia Zhang
Topics: MQ, MoE
Abstract

A critical approach for efficiently deploying Mixture-of-Experts (MoE) models with massive parameters is quantization. However, state-of-the-art MoE models suffer non-negligible accuracy loss under extreme quantization, such as below 4 bits. To address this, we introduce MiLo, a novel method that augments highly quantized MoEs with a mixture of low-rank compensators. These compensators consume only a small amount of additional memory yet significantly recover the accuracy lost to extreme quantization. MiLo also identifies that MoE models exhibit distinctive characteristics across weights due to their hybrid dense-sparse architectures, and employs adaptive rank selection policies along with iterative optimizations to close the accuracy gap. MiLo does not rely on calibration data, allowing it to generalize to different MoE models and datasets without overfitting to a calibration set. To avoid the hardware inefficiencies of extreme quantization, such as 3-bit, MiLo develops Tensor Core-friendly 3-bit kernels, enabling measured latency speedups on 3-bit quantized MoE models. Our evaluation shows that MiLo outperforms existing methods on SoTA MoE models across various tasks.
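
The core idea of augmenting a quantized weight with a low-rank compensator can be sketched as follows: approximate W by dequant(quant(W)) + A @ B, where A and B are fitted to the quantization residual. The sketch below uses simple per-group symmetric 3-bit quantization and a truncated SVD of the residual; it is an illustrative assumption, not the authors' implementation, and omits MiLo's adaptive rank selection, iterative optimization, and Tensor Core kernels. All function names here are hypothetical.

# Minimal sketch (illustrative, not MiLo's algorithm or kernels):
#   W  ≈  dequant(quant_3bit(W)) + A @ B,   A: (out, r), B: (r, in)
import torch

def quantize_3bit(w: torch.Tensor, group_size: int = 128) -> torch.Tensor:
    """Symmetric per-group 3-bit quantize-dequantize (for illustration only)."""
    out_features, in_features = w.shape
    w_groups = w.reshape(-1, group_size)                    # (n_groups, group_size)
    scale = w_groups.abs().amax(dim=1, keepdim=True) / 3.0  # map max magnitude to level 3
    q = torch.clamp(torch.round(w_groups / scale), -4, 3)   # signed 3-bit levels
    return (q * scale).reshape(out_features, in_features)

def low_rank_compensator(w: torch.Tensor, w_deq: torch.Tensor, rank: int):
    """Fit A @ B to the quantization residual W - dequant(quant(W)) via truncated SVD."""
    residual = w - w_deq
    u, s, vh = torch.linalg.svd(residual, full_matrices=False)
    A = u[:, :rank] * s[:rank]   # (out, r), singular values folded into A
    B = vh[:rank, :]             # (r, in)
    return A, B

# Usage: compare reconstruction error with and without the compensator.
w = torch.randn(512, 512)
w_deq = quantize_3bit(w)
A, B = low_rank_compensator(w, w_deq, rank=16)
err_q = (w - w_deq).norm() / w.norm()
err_comp = (w - (w_deq + A @ B)).norm() / w.norm()
print(f"relative error: quantized only {err_q:.4f}, with compensator {err_comp:.4f}")

The extra memory for A and B is small relative to the expert weights when the rank is low, which is the trade-off the abstract describes: a modest memory overhead in exchange for recovering accuracy lost to extreme quantization.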

@article{huang2025_2504.02658,
  title={MiLo: Efficient Quantized MoE Inference with Mixture of Low-Rank Compensators},
  author={Beichen Huang and Yueming Yuan and Zelei Shao and Minjia Zhang},
  journal={arXiv preprint arXiv:2504.02658},
  year={2025}
}