On the Efficacy of Knowledge Distillation (arXiv:1910.01348)
3 October 2019
Papers citing "On the Efficacy of Knowledge Distillation" (19 of 319 papers shown)
Fast, Accurate, and Simple Models for Tabular Data via Augmented Distillation. Rasool Fakoor, Jonas W. Mueller, Nick Erickson, Pratik Chaudhari, Alex Smola. 25 Jun 2020.
Paying more attention to snapshots of Iterative Pruning: Improving Model Compression via Ensemble Distillation. Duong H. Le, Vo Trung Nhan, N. Thoai. 20 Jun 2020. [VLM]
Knowledge Distillation: A Survey. Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao. 09 Jun 2020. [VLM]
Self-Distillation as Instance-Specific Label Smoothing. Zhilu Zhang, M. Sabuncu. 09 Jun 2020.
ResKD: Residual-Guided Knowledge Distillation. Xuewei Li, Songyuan Li, Bourahla Omar, Fei Wu, Xi Li. 08 Jun 2020.
An Empirical Analysis of the Impact of Data Augmentation on Knowledge Distillation. Deepan Das, Haley Massa, Abhimanyu Kulkarni, Theodoros Rekatsinas. 06 Jun 2020.
An Overview of Neural Network Compression. James O'Neill. 05 Jun 2020. [AI4CE]
Channel Distillation: Channel-Wise Attention for Knowledge Distillation. Zaida Zhou, Chaoran Zhuge, Xinwei Guan, Wen Liu. 02 Jun 2020.
Sub-Band Knowledge Distillation Framework for Speech Enhancement. Xiang Hao, Shi-Xue Wen, Xiangdong Su, Yun Liu, Guanglai Gao, Xiaofei Li. 29 May 2020.
Language Model Prior for Low-Resource Neural Machine Translation. Christos Baziotis, Barry Haddow, Alexandra Birch. 30 Apr 2020.
Filter Grafting for Deep Neural Networks: Reason, Method, and Cultivation. Hao Cheng, Fanxu Meng, Ke Li, Huixiang Luo, Guangming Lu, Xing Sun, Feiyue Huang. 26 Apr 2020.
Neural Networks Are More Productive Teachers Than Human Raters: Active Mixup for Data-Efficient Knowledge Distillation from a Blackbox Model. Dongdong Wang, Yandong Li, Liqiang Wang, Boqing Gong. 31 Mar 2020.
A Survey of Methods for Low-Power Deep Learning and Computer Vision. Abhinav Goel, Caleb Tung, Yung-Hsiang Lu, George K. Thiruvathukal. 24 Mar 2020. [VLM]
Knowledge distillation via adaptive instance normalization. Jing Yang, Brais Martínez, Adrian Bulat, Georgios Tzimiropoulos. 09 Mar 2020.
Pacemaker: Intermediate Teacher Knowledge Distillation For On-The-Fly Convolutional Neural Network. Wonchul Son, Youngbin Kim, Wonseok Song, Youngsuk Moon, Wonjun Hwang. 09 Mar 2020.
Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers. Zhuohan Li, Eric Wallace, Sheng Shen, Kevin Lin, Kurt Keutzer, Dan Klein, Joseph E. Gonzalez. 26 Feb 2020.
QUEST: Quantized embedding space for transferring knowledge. Himalaya Jain, Spyros Gidaris, N. Komodakis, P. Pérez, Matthieu Cord. 03 Dec 2019.
Knowledge Transfer Graph for Deep Collaborative Learning. Soma Minami, Tsubasa Hirakawa, Takayoshi Yamashita, H. Fujiyoshi. 10 Sep 2019.
Knowledge Distillation by On-the-Fly Native Ensemble. Xu Lan, Xiatian Zhu, S. Gong. 12 Jun 2018.