One Student Knows All Experts Know: From Sparse to Dense

26 January 2022
Fuzhao Xue
Xiaoxin He
Xiaozhe Ren
Yuxuan Lou
Yang You
Topics: MoMe, MoE
Abstract

The human education system trains one student with multiple experts. Mixture-of-experts (MoE) is a powerful sparse architecture that includes multiple experts. However, sparse MoE models are prone to overfitting, hard to deploy, and not hardware-friendly for practitioners. In this work, inspired by the human education model, we propose a novel task, knowledge integration, to obtain a dense student model (OneS) that is as knowledgeable as one sparse MoE. We investigate this task by proposing a general training framework consisting of knowledge gathering and knowledge distillation. Specifically, to gather key knowledge from different pre-trained experts, we first investigate four possible knowledge gathering methods, i.e., summation, averaging, Top-K Knowledge Gathering (Top-KG), and Singular Value Decomposition Knowledge Gathering (SVD-KG), the latter proposed in this paper. We then refine the dense student model by knowledge distillation to offset the noise introduced by gathering. On ImageNet, our OneS preserves 61.7% of the MoE benefits and achieves 78.4% top-1 accuracy on ImageNet with only 15M parameters. On four natural language processing datasets, OneS obtains 88.2% of the MoE benefits and outperforms the best baseline by 51.7% using the same architecture and training data. In addition, compared with its MoE counterpart, OneS achieves a 3.7× inference speedup thanks to less computation and a hardware-friendly architecture.
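The abstract describes SVD-KG only at a high level. As a rough illustration of the idea, the snippet below is a minimal PyTorch sketch, not the authors' implementation: it gathers the experts' weight matrices into a single dense matrix by keeping the top-`rank` singular components of each expert (a low-rank "key knowledge" approximation) and averaging the reconstructions. The function name `svd_knowledge_gathering`, the `rank` knob, and the averaging step are assumptions for illustration; the paper's exact gathering rule may differ.

import torch

def svd_knowledge_gathering(expert_weights, rank):
    """Hypothetical sketch of SVD-based knowledge gathering.

    expert_weights: list of [out_dim, in_dim] tensors, one per MoE expert.
    rank: number of singular values kept per expert (assumed hyperparameter).
    Returns a single dense weight of the same shape as each expert.
    """
    gathered = torch.zeros_like(expert_weights[0])
    for W in expert_weights:
        # Decompose the expert weight and keep only its top singular directions.
        U, S, Vh = torch.linalg.svd(W, full_matrices=False)
        W_low = U[:, :rank] @ torch.diag(S[:rank]) @ Vh[:rank, :]
        gathered += W_low
    # Average the low-rank reconstructions across experts.
    return gathered / len(expert_weights)

# Example: gather 4 experts of a 512 -> 2048 FFN layer into one dense layer.
experts = [torch.randn(2048, 512) for _ in range(4)]
dense_weight = svd_knowledge_gathering(experts, rank=64)
print(dense_weight.shape)  # torch.Size([2048, 512])

In the framework the abstract outlines, a dense student initialized this way would then be refined by knowledge distillation from the sparse MoE to offset the approximation noise introduced by gathering.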
