Efficient training of lightweight neural networks using Online Self-Acquired Knowledge Distillation
Maria Tzelepi, Anastasios Tefas
arXiv:2108.11798, 26 August 2021
Papers citing "Efficient training of lightweight neural networks using Online Self-Acquired Knowledge Distillation" (11 papers):
Feature Fusion for Online Mutual Knowledge Distillation. Jangho Kim, Minsung Hyun, Inseop Chung, Nojun Kwak. 19 Apr 2019.
Self-Referenced Deep Learning. Xu Lan, Xiatian Zhu, S. Gong. 19 Nov 2018.
Knowledge Distillation by On-the-Fly Native Ensemble. Xu Lan, Xiatian Zhu, S. Gong. 12 Jun 2018.
Born Again Neural Networks. Tommaso Furlanello, Zachary Chase Lipton, Michael Tschannen, Laurent Itti, Anima Anandkumar. 12 May 2018.
Large scale distributed neural network training through online distillation. Rohan Anil, Gabriel Pereyra, Alexandre Passos, Róbert Ormándi, George E. Dahl, Geoffrey E. Hinton. 09 Apr 2018.
A Survey of Model Compression and Acceleration for Deep Neural Networks. Yu Cheng, Duo Wang, Pan Zhou, Tao Zhang. 23 Oct 2017.
Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms. Han Xiao, Kashif Rasul, Roland Vollgraf. 25 Aug 2017.
Deep Mutual Learning. Ying Zhang, Tao Xiang, Timothy M. Hospedales, Huchuan Lu. 01 Jun 2017.
Deep Residual Learning for Image Recognition. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. 10 Dec 2015.
Distilling the Knowledge in a Neural Network. Geoffrey E. Hinton, Oriol Vinyals, J. Dean. 09 Mar 2015.
Do Deep Nets Really Need to be Deep? Lei Jimmy Ba, R. Caruana. 21 Dec 2013.