arXiv:1910.01348
On the Efficacy of Knowledge Distillation (3 October 2019)
Jang Hyun Cho, Bharath Hariharan
Cited By: papers citing "On the Efficacy of Knowledge Distillation" (50 of 319 shown)
Each entry lists the title, then authors · topic tags · announcement date.
Prune Your Model Before Distill It
Jinhyuk Park, Albert No · VLM · 30 Sep 2021

Consistency Training of Multi-exit Architectures for Sensor Data
Aaqib Saeed · 27 Sep 2021

Partial to Whole Knowledge Distillation: Progressive Distilling Decomposed Knowledge Boosts Student Better
Xuanyang Zhang, Xinming Zhang, Jian Sun · 26 Sep 2021

Multi-Scale Aligned Distillation for Low-Resolution Detection
Lu Qi, Jason Kuen, Jiuxiang Gu, Zhe-nan Lin, Yi Wang, Yukang Chen, Yanwei Li, Jiaya Jia · 14 Sep 2021

FedZKT: Zero-Shot Knowledge Transfer towards Resource-Constrained Federated Learning with Heterogeneous On-Device Models
Lan Zhang, Dapeng Wu, Xiaoyong Yuan · FedML · 08 Sep 2021

A distillation based approach for the diagnosis of diseases
Hmrishav Bandyopadhyay, Shuvayan Ghosh Dastidar, Bisakh Mondal, Biplab Banerjee, N. Das · 07 Aug 2021

Semi-Supervising Learning, Transfer Learning, and Knowledge Distillation with SimCLR
Khoi Duc Minh Nguyen, Y. Nguyen, Bao Le · 02 Aug 2021

Pseudo-LiDAR Based Road Detection
Libo Sun, Haokui Zhang, Wei Yin · 28 Jul 2021

SAGE: A Split-Architecture Methodology for Efficient End-to-End Autonomous Vehicle Control
Arnav V. Malawade, Mohanad Odema, Sebastien Lajeunesse-DeGroot, M. A. Al Faruque · 22 Jul 2021

Isotonic Data Augmentation for Knowledge Distillation
Wanyun Cui, Sen Yan · 03 Jul 2021

Co-advise: Cross Inductive Bias Distillation
Sucheng Ren, Zhengqi Gao, Tianyu Hua, Zihui Xue, Yonglong Tian, Shengfeng He, Hang Zhao · 23 Jun 2021

Teacher's pet: understanding and mitigating biases in distillation
Michal Lukasik, Srinadh Bhojanapalli, A. Menon, Sanjiv Kumar · 19 Jun 2021

Energy-efficient Knowledge Distillation for Spiking Neural Networks
Dongjin Lee, Seongsik Park, Jongwan Kim, Wuhyeong Doh, Sungroh Yoon · 14 Jun 2021

Does Knowledge Distillation Really Work?
Samuel Stanton, Pavel Izmailov, Polina Kirichenko, Alexander A. Alemi, A. Wilson · FedML · 10 Jun 2021

Knowledge distillation: A good teacher is patient and consistent
Lucas Beyer, Xiaohua Zhai, Amelie Royer, L. Markeeva, Rohan Anil, Alexander Kolesnikov · VLM · 09 Jun 2021

Joint-DetNAS: Upgrade Your Detector with NAS, Pruning and Dynamic Distillation
Lewei Yao, Renjie Pi, Hang Xu, Wei Zhang, Zhenguo Li, Tong Zhang · 27 May 2021

Towards Compact Single Image Super-Resolution via Contrastive Self-distillation
Yanbo Wang, Shaohui Lin, Yanyun Qu, Haiyan Wu, Zhizhong Zhang, Yuan Xie, Angela Yao · SupR · 25 May 2021

Comparing Kullback-Leibler Divergence and Mean Squared Error Loss in Knowledge Distillation
Taehyeon Kim, Jaehoon Oh, Nakyil Kim, Sangwook Cho, Se-Young Yun · 19 May 2021

Rethinking Ensemble-Distillation for Semantic Segmentation Based Unsupervised Domain Adaptation
Chen-Hao Chao, Bo Wun Cheng, Chun-Yi Lee · 29 Apr 2021

Spatio-Temporal Pruning and Quantization for Low-latency Spiking Neural Networks
Sayeed Shafayet Chowdhury, Isha Garg, Kaushik Roy · 26 Apr 2021

Knowledge Distillation as Semiparametric Inference
Tri Dao, G. Kamath, Vasilis Syrgkanis, Lester W. Mackey · 20 Apr 2021

Thief, Beware of What Get You There: Towards Understanding Model Extraction Attack
Xinyi Zhang, Chengfang Fang, Jie Shi · MIACV, MLAU, SILM · 13 Apr 2021

Knowledge Distillation By Sparse Representation Matching
D. Tran, Moncef Gabbouj, Alexandros Iosifidis · 31 Mar 2021

Distilling a Powerful Student Model via Online Knowledge Distillation
Shaojie Li, Mingbao Lin, Yan Wang, Yongjian Wu, Yonghong Tian, Ling Shao, Rongrong Ji · FedML · 26 Mar 2021
Improved Techniques for Quantizing Deep Networks with Adaptive Bit-Widths
Ximeng Sun, Yikang Shen, Chun-Fu Chen, Naigang Wang, Bowen Pan, Kailash Gopalakrishnan, A. Oliva, Rogerio Feris, Kate Saenko · MQ · 02 Mar 2021
DeepReDuce: ReLU Reduction for Fast Private Inference
N. Jha, Zahra Ghodsi, S. Garg, Brandon Reagen · 02 Mar 2021

There is More than Meets the Eye: Self-Supervised Multi-Object Detection and Tracking with Sound by Distilling Multimodal Knowledge
Francisco Rivera Valverde, Juana Valeria Hurtado, Abhinav Valada · 01 Mar 2021

Exploring Knowledge Distillation of a Deep Neural Network for Multi-Script identification
Shuvayan Ghosh Dastidar, Kalpita Dutta, N. Das, M. Kundu, M. Nasipuri · 20 Feb 2021

Fast End-to-End Speech Recognition via Non-Autoregressive Models and Cross-Modal Knowledge Transferring from BERT
Ye Bai, Jiangyan Yi, J. Tao, Zhengkun Tian, Zhengqi Wen, Shuai Zhang · RALM · 15 Feb 2021

Learning Student-Friendly Teacher Networks for Knowledge Distillation
D. Park, Moonsu Cha, C. Jeong, Daesin Kim, Bohyung Han · 12 Feb 2021

Collaborative Teacher-Student Learning via Multiple Knowledge Transfer
Liyuan Sun, Jianping Gou, Baosheng Yu, Lan Du, Dacheng Tao · 21 Jan 2021

SEED: Self-supervised Distillation For Visual Representation
Zhiyuan Fang, Jianfeng Wang, Lijuan Wang, Lei Zhang, Yezhou Yang, Zicheng Liu · SSL · 12 Jan 2021

Training data-efficient image transformers & distillation through attention
Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou · ViT · 23 Dec 2020

Computation-Efficient Knowledge Distillation via Uncertainty-Aware Mixup
Guodong Xu, Ziwei Liu, Chen Change Loy · UQCV · 17 Dec 2020

Robustness of Accuracy Metric and its Inspirations in Learning with Noisy Labels
Pengfei Chen, Junjie Ye, Guangyong Chen, Jingwei Zhao, Pheng-Ann Heng · NoLa · 08 Dec 2020

Data-Free Model Extraction
Jean-Baptiste Truong, Pratyush Maini, R. Walls, Nicolas Papernot · MIACV · 30 Nov 2020

Domain Adaptive Knowledge Distillation for Driving Scene Semantic Segmentation
D. Kothandaraman, Athira M. Nambiar, Anurag Mittal · CLL · 03 Nov 2020

Attribution Preservation in Network Compression for Reliable Network Interpretation
Geondo Park, J. Yang, Sung Ju Hwang, Eunho Yang · 28 Oct 2020

Knowledge Distillation in Wide Neural Networks: Risk Bound, Data Efficiency and Imperfect Teacher
Guangda Ji, Zhanxing Zhu · 20 Oct 2020

Reducing the Teacher-Student Gap via Spherical Knowledge Disitllation
Jia Guo, Minghao Chen, Yao Hu, Chen Zhu, Xiaofei He, Deng Cai · 15 Oct 2020
Locally Linear Region Knowledge Distillation
Xiang Deng, Zhongfei Zhang · 09 Oct 2020
Neighbourhood Distillation: On the benefits of non end-to-end distillation
Laetitia Shao, Max Moroz, Elad Eban, Yair Movshovitz-Attias · ODL · 02 Oct 2020

Densely Guided Knowledge Distillation using Multiple Teacher Assistants
Wonchul Son, Jaemin Na, Junyong Choi, Wonjun Hwang · 18 Sep 2020

Extending Label Smoothing Regularization with Self-Knowledge Distillation
Jiyue Wang, Pei Zhang, Wenjie Pang, Jie Li · 11 Sep 2020

On the Orthogonality of Knowledge Distillation with Other Techniques: From an Ensemble Perspective
Seonguk Park, Kiyoon Yoo, Nojun Kwak · FedML · 09 Sep 2020

MetaDistiller: Network Self-Boosting via Meta-Learned Top-Down Distillation
Benlin Liu, Yongming Rao, Jiwen Lu, Jie Zhou, Cho-Jui Hsieh · 27 Aug 2020

Prime-Aware Adaptive Distillation
Youcai Zhang, Zhonghao Lan, Yuchen Dai, Fangao Zeng, Yan Bai, Jie Chang, Yichen Wei · 04 Aug 2020

Learning with Privileged Information for Efficient Image Super-Resolution
Wonkyung Lee, Junghyup Lee, Dohyung Kim, Bumsub Ham · 15 Jul 2020

Tracking-by-Trackers with a Distilled and Reinforced Model
Matteo Dunnhofer, N. Martinel, C. Micheloni · VOT, OffRL · 08 Jul 2020

On the Demystification of Knowledge Distillation: A Residual Network Perspective
N. Jha, Rajat Saini, Sparsh Mittal · 30 Jun 2020