PrUE: Distilling Knowledge from Sparse Teacher Networks
Shaopu Wang, Xiaojun Chen, Mengzhen Kou, Jinqiao Shi
arXiv:2207.00586
3 July 2022

Papers citing "PrUE: Distilling Knowledge from Sparse Teacher Networks" (3 of 3 papers shown)

ReffAKD: Resource-efficient Autoencoder-based Knowledge Distillation
Divyang Doshi, Jung-Eun Kim
15 Apr 2024

Large scale distributed neural network training through online distillation
Rohan Anil, Gabriel Pereyra, Alexandre Passos, Róbert Ormándi, George E. Dahl, Geoffrey E. Hinton
09 Apr 2018

Aggregated Residual Transformations for Deep Neural Networks
Saining Xie, Ross B. Girshick, Piotr Dollár, Zhuowen Tu, Kaiming He
16 Nov 2016