Knowledge Distillation: A Survey (arXiv 2006.05525)
Jianping Gou, Baosheng Yu, Stephen J. Maybank, Dacheng Tao
9 June 2020
Topics: VLM
Papers citing "Knowledge Distillation: A Survey" (28 of 328 papers shown):

Doubly Convolutional Neural Networks. Shuangfei Zhai, Yu Cheng, Weining Lu, Zhongfei Zhang. 30 Oct 2016. Topics: OOD, 3DV. Citations: 63.
Deep Model Compression: Distilling Knowledge from Noisy Teachers. Bharat Bhusan Sau, Vineeth N. Balasubramanian. 30 Oct 2016. Citations: 181.
Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data. Nicolas Papernot, Martín Abadi, Úlfar Erlingsson, Ian Goodfellow, Kunal Talwar. 18 Oct 2016. Citations: 1,020.
Xception: Deep Learning with Depthwise Separable Convolutions. François Chollet. 07 Oct 2016. Topics: MDE, BDL, PINN. Citations: 14,608.
Distilling an Ensemble of Greedy Dependency Parsers into One MST Parser. Adhiguna Kuncoro, Miguel Ballesteros, Lingpeng Kong, Chris Dyer, Noah A. Smith. 24 Sep 2016. Topics: MoE. Citations: 77.
Densely Connected Convolutional Networks. Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger. 25 Aug 2016. Topics: PINN, 3DV. Citations: 36,881.
Knowledge Distillation for Small-footprint Highway Networks. Liang Lu, Michelle Guo, Steve Renals. 02 Aug 2016. Citations: 73.
Learning without Forgetting. Zhizhong Li, Derek Hoiem. 29 Jun 2016. Topics: CLL, OOD, SSL. Citations: 4,428.
Sequence-Level Knowledge Distillation. Yoon Kim, Alexander M. Rush. 25 Jun 2016. Citations: 1,122.
Adapting Models to Signal Degradation using Distillation. Jong-Chyi Su, Subhransu Maji. 01 Apr 2016. Citations: 31.
Quantized Convolutional Neural Networks for Mobile Devices. Jiaxiang Wu, Cong Leng, Yuhang Wang, Qinghao Hu, Jian Cheng. 21 Dec 2015. Topics: MQ. Citations: 1,167.
Deep Residual Learning for Image Recognition. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. 10 Dec 2015. Topics: MedIm. Citations: 194,426.
Net2Net: Accelerating Learning via Knowledge Transfer. Tianqi Chen, Ian Goodfellow, Jonathon Shlens. 18 Nov 2015. Citations: 672.
Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks. Nicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, Ananthram Swami. 14 Nov 2015. Topics: AAML. Citations: 3,076.
Unifying distillation and privileged information. David Lopez-Paz, Léon Bottou, Bernhard Schölkopf, Vladimir Vapnik. 11 Nov 2015. Topics: FedML. Citations: 463.
BinaryConnect: Training Deep Neural Networks with binary weights during propagations. Matthieu Courbariaux, Yoshua Bengio, Jean-Pierre David. 02 Nov 2015. Topics: MQ. Citations: 2,992.
Structured Transforms for Small-Footprint Deep Learning. Vikas Sindhwani, Tara N. Sainath, Sanjiv Kumar. 06 Oct 2015. Citations: 240.
Cross Modal Distillation for Supervision Transfer. Saurabh Gupta, Judy Hoffman, Jitendra Malik. 02 Jul 2015. Citations: 538.
Distilling Word Embeddings: An Encoding Approach. Lili Mou, Ran Jia, Yan Xu, Ge Li, Lu Zhang, Zhi Jin. 15 Jun 2015. Topics: FedML. Citations: 27.
Learning both Weights and Connections for Efficient Neural Networks. Song Han, Jeff Pool, John Tran, William J. Dally. 08 Jun 2015. Topics: CVBM. Citations: 6,700.
Transferring Knowledge from a RNN to a DNN. William Chan, Nan Rosemary Ke, Ian Lane. 07 Apr 2015. Citations: 75.
Distilling the Knowledge in a Neural Network. Geoffrey E. Hinton, Oriol Vinyals, Jeff Dean. 09 Mar 2015. Topics: FedML. Citations: 19,733.
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. Sergey Ioffe, Christian Szegedy. 11 Feb 2015. Topics: OOD. Citations: 43,341.
FitNets: Hints for Thin Deep Nets. Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, Yoshua Bengio. 19 Dec 2014. Topics: FedML. Citations: 3,898.
Generative Adversarial Networks. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio. 10 Jun 2014. Topics: GAN. Citations: 2,196.
Exploiting Linear Structure Within Convolutional Networks for Efficient Evaluation. Emily L. Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, Rob Fergus. 02 Apr 2014. Topics: FAtt. Citations: 1,693.
Do Deep Nets Really Need to be Deep? Lei Jimmy Ba, Rich Caruana. 21 Dec 2013. Citations: 2,119.
Representation Learning: A Review and New Perspectives. Yoshua Bengio, Aaron Courville, Pascal Vincent. 24 Jun 2012. Topics: OOD, SSL. Citations: 12,458.