MoTE: Mixture of Ternary Experts for Memory-efficient Large Multimodal Models

17 June 2025
Hongyu Wang
Jiayu Xu
Ruiping Wang
Yan Feng
Yitao Zhai
Peng Pei
Xunliang Cai
Xilin Chen
MoE
ArXiv (abs) · PDF · HTML
Main: 9 pages · 5 figures · Bibliography: 5 pages · 12 tables · Appendix: 8 pages
Abstract

Large multimodal Mixture-of-Experts (MoE) models effectively scale model size to boost performance while keeping the number of active parameters fixed. However, previous works have primarily used full-precision experts during sparse up-cycling. Although such models perform well on end tasks, the large number of experts incurs a higher memory footprint, which poses significant challenges for deployment on edge devices. In this work, we propose MoTE, a scalable and memory-efficient approach for training Mixture-of-Ternary-Experts models from a dense checkpoint. Instead of training fewer high-precision experts, we train more low-precision experts during up-cycling. Specifically, we use the pre-trained FFN as a shared expert and train ternary routed experts with parameters in {-1, 0, 1}. Extensive experiments show that our approach exhibits a promising scaling trend with model size. MoTE achieves performance comparable to the full-precision baseline MoE-LLaVA while offering a lower memory footprint. Furthermore, our approach is compatible with post-training quantization methods, and its advantage is amplified further as the memory constraint tightens. Given the same expert memory footprint of 3.4 GB and combined with post-training quantization, MoTE outperforms MoE-LLaVA by 4.3% average accuracy on end tasks, demonstrating its effectiveness and potential for memory-constrained devices.

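The abstract's core idea can be illustrated with a short sketch. The PyTorch code below is a hypothetical illustration, not the authors' implementation: it pairs a full-precision shared FFN expert (standing in for the pre-trained dense FFN) with top-1-routed experts whose weights are constrained to {-1, 0, 1} via absmean ternarization and a straight-through estimator. The class names, the SiLU activation, the top-1 routing rule, and the quantization recipe are all assumptions made for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F


def ternarize(w: torch.Tensor) -> torch.Tensor:
    """Map full-precision weights to {-1, 0, 1} scaled by their mean magnitude (illustrative recipe)."""
    scale = w.abs().mean().clamp(min=1e-5)
    w_q = (w / scale).round().clamp(-1, 1) * scale
    # Straight-through estimator: forward pass uses w_q, gradients flow through w.
    return w + (w_q - w).detach()


class TernaryLinear(nn.Linear):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.linear(x, ternarize(self.weight), self.bias)


class TernaryExpert(nn.Module):
    """A routed FFN expert with ternary weights."""
    def __init__(self, d_model: int, d_ffn: int):
        super().__init__()
        self.up = TernaryLinear(d_model, d_ffn)
        self.down = TernaryLinear(d_ffn, d_model)

    def forward(self, x):
        return self.down(F.silu(self.up(x)))


class MoTELayer(nn.Module):
    """Shared full-precision expert plus top-1-routed ternary experts (hypothetical sketch)."""
    def __init__(self, d_model: int, d_ffn: int, num_experts: int):
        super().__init__()
        # Shared expert: stands in for the pre-trained dense FFN kept at full precision.
        self.shared = nn.Sequential(nn.Linear(d_model, d_ffn), nn.SiLU(),
                                    nn.Linear(d_ffn, d_model))
        self.experts = nn.ModuleList(TernaryExpert(d_model, d_ffn)
                                     for _ in range(num_experts))
        self.router = nn.Linear(d_model, num_experts)

    def forward(self, x):  # x: (batch, seq, d_model)
        gates = F.softmax(self.router(x), dim=-1)            # routing probabilities
        top_gate, top_idx = gates.max(dim=-1, keepdim=True)  # top-1 routing
        routed = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = (top_idx.squeeze(-1) == i)
            if mask.any():
                routed[mask] = expert(x[mask])
        return self.shared(x) + top_gate * routed

Because each routed expert stores only ternary weights, adding more experts grows the memory footprint far more slowly than adding full-precision experts would, which is the trade-off the abstract describes.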
@article{wang2025_2506.14435,
  title={MoTE: Mixture of Ternary Experts for Memory-efficient Large Multimodal Models},
  author={Hongyu Wang and Jiayu Xu and Ruiping Wang and Yan Feng and Yitao Zhai and Peng Pei and Xunliang Cai and Xilin Chen},
  journal={arXiv preprint arXiv:2506.14435},
  year={2025}
}