PrUE: Distilling Knowledge from Sparse Teacher Networks

3 July 2022
Shaopu Wang, Xiaojun Chen, Mengzhen Kou, Jinqiao Shi

Papers citing "PrUE: Distilling Knowledge from Sparse Teacher Networks"

3 of 3 citing papers shown.

ReffAKD: Resource-efficient Autoencoder-based Knowledge Distillation
Divyang Doshi, Jung-Eun Kim
15 Apr 2024

Large scale distributed neural network training through online distillation
Rohan Anil, Gabriel Pereyra, Alexandre Passos, Róbert Ormándi, George E. Dahl, Geoffrey E. Hinton
09 Apr 2018

Aggregated Residual Transformations for Deep Neural Networks
Saining Xie, Ross B. Girshick, Piotr Dollár, Zhuowen Tu, Kaiming He
16 Nov 2016