On the Efficacy of Knowledge Distillation

3 October 2019
Jang Hyun Cho
Bharath Hariharan
ArXiv · PDF · HTML

Papers citing "On the Efficacy of Knowledge Distillation"

50 / 319 papers shown
ORC: Network Group-based Knowledge Distillation using Online Role Change
Jun-woo Choi
Hyeon Cho
Seockhwa Jeong
Wonjun Hwang
27
3
0
01 Jun 2022
What Knowledge Gets Distilled in Knowledge Distillation?
Utkarsh Ojha
Yuheng Li
Anirudh Sundara Rajan
Yingyu Liang
Yong Jae Lee
FedML
29
18
0
31 May 2022
Parameter-Efficient and Student-Friendly Knowledge Distillation
Jun Rao
Xv Meng
Liang Ding
Shuhan Qi
Dacheng Tao
37
46
0
28 May 2022
A Closer Look at Self-Supervised Lightweight Vision Transformers
Shaoru Wang
Jin Gao
Zeming Li
Jian Sun
Weiming Hu
ViT
73
42
0
28 May 2022
Heterogeneous Collaborative Learning for Personalized Healthcare Analytics via Messenger Distillation
Guanhua Ye
Tong Chen
Yawen Li
Li-zhen Cui
Quoc Viet Hung Nguyen
Hongzhi Yin
38
7
0
27 May 2022
Knowledge Distillation from A Stronger Teacher
Tao Huang
Shan You
Fei Wang
Chao Qian
Chang Xu
32
237
0
21 May 2022
Knowledge Distillation Meets Open-Set Semi-Supervised Learning
Jing Yang
Xiatian Zhu
Adrian Bulat
Brais Martínez
Georgios Tzimiropoulos
37
8
0
13 May 2022
Spot-adaptive Knowledge Distillation
Jie Song
Ying Chen
Jingwen Ye
Mingli Song
25
72
0
05 May 2022
Generalized Knowledge Distillation via Relationship Matching
Han-Jia Ye
Su Lu
De-Chuan Zhan
FedML
22
20
0
04 May 2022
Masked Generative Distillation
Zhendong Yang
Zhe Li
Mingqi Shao
Dachuan Shi
Zehuan Yuan
Chun Yuan
FedML
38
169
0
03 May 2022
CNLL: A Semi-supervised Approach For Continual Noisy Label Learning
Nazmul Karim
Umar Khalid
Ashkan Esmaeili
Nazanin Rahnavard
NoLa
CLL
38
15
0
21 Apr 2022
SF-PATE: Scalable, Fair, and Private Aggregation of Teacher Ensembles
Cuong Tran
Keyu Zhu
Ferdinando Fioretto
Pascal Van Hentenryck
32
11
0
11 Apr 2022
Unified and Effective Ensemble Knowledge Distillation
Chuhan Wu
Fangzhao Wu
Tao Qi
Yongfeng Huang
FedML
27
10
0
01 Apr 2022
It's All In the Teacher: Zero-Shot Quantization Brought Closer to the Teacher
Kanghyun Choi
Hye Yoon Lee
Deokki Hong
Joonsang Yu
Noseong Park
Youngsok Kim
Jinho Lee
MQ
38
31
0
31 Mar 2022
Investigating Top-$k$ White-Box and Transferable Black-box Attack
Chaoning Zhang
Philipp Benz
Adil Karjauv
Jae-Won Cho
Kang Zhang
In So Kweon
33
43
0
30 Mar 2022
PCA-Based Knowledge Distillation Towards Lightweight and Content-Style Balanced Photorealistic Style Transfer Models
Tai-Yin Chiu
Danna Gurari
23
19
0
25 Mar 2022
Node Representation Learning in Graph via Node-to-Neighbourhood Mutual Information Maximization
Wei Dong
Junsheng Wu
Yi-wei Luo
Zongyuan Ge
Peifeng Wang
SSL
39
19
0
23 Mar 2022
A Closer Look at Knowledge Distillation with Features, Logits, and Gradients
Yen-Chang Hsu
James Smith
Yilin Shen
Z. Kira
Hongxia Jin
27
7
0
18 Mar 2022
When Chosen Wisely, More Data Is What You Need: A Universal Sample-Efficient Strategy For Data Augmentation
Ehsan Kamalloo
Mehdi Rezagholizadeh
A. Ghodsi
28
9
0
17 Mar 2022
Decoupled Knowledge Distillation
Borui Zhao
Quan Cui
Renjie Song
Yiyu Qiu
Jiajun Liang
17
526
0
16 Mar 2022
On the benefits of knowledge distillation for adversarial robustness
Javier Maroto
Guillermo Ortiz-Jiménez
P. Frossard
AAML
FedML
25
20
0
14 Mar 2022
PyNET-QxQ: An Efficient PyNET Variant for QxQ Bayer Pattern Demosaicing in CMOS Image Sensors
Minhyeok Cho
Haechang Lee
Hyunwoo Je
Kijeong Kim
Dongil Ryu
Albert No
33
3
0
08 Mar 2022
Better Supervisory Signals by Observing Learning Paths
Yi Ren
Shangmin Guo
Danica J. Sutherland
33
21
0
04 Mar 2022
Meta Knowledge Distillation
Jihao Liu
Boxiao Liu
Hongsheng Li
Yu Liu
18
25
0
16 Feb 2022
Exploring Inter-Channel Correlation for Diversity-preserved Knowledge Distillation
Li Liu
Qingle Huang
Sihao Lin
Hongwei Xie
Bing Wang
Xiaojun Chang
Xiao-Xue Liang
28
100
0
08 Feb 2022
Adaptive Mixing of Auxiliary Losses in Supervised Learning
D. Sivasubramanian
Ayush Maheshwari
Pradeep Shenoy
A. Prathosh
Ganesh Ramakrishnan
29
5
0
07 Feb 2022
A Novel Incremental Learning Driven Instance Segmentation Framework to Recognize Highly Cluttered Instances of the Contraband Items
Taimur Hassan
S. Akçay
Bennamoun
Salman Khan
Naoufel Werghi
35
23
0
07 Jan 2022
Cross-Modality Deep Feature Learning for Brain Tumor Segmentation
Dingwen Zhang
Guohai Huang
Qiang Zhang
Jungong Han
Junwei Han
Yizhou Yu
25
217
0
07 Jan 2022
Role of Data Augmentation Strategies in Knowledge Distillation for Wearable Sensor Data
Eunyeong Jeon
Anirudh Som
Ankita Shukla
Kristina Hasanaj
M. Buman
Pavan Turaga
40
11
0
01 Jan 2022
ERNIE 3.0 Titan: Exploring Larger-scale Knowledge Enhanced Pre-training for Language Understanding and Generation
Shuohuan Wang
Yu Sun
Yang Xiang
Zhihua Wu
Siyu Ding
...
Tian Wu
Wei Zeng
Ge Li
Wen Gao
Haifeng Wang
ELM
39
79
0
23 Dec 2021
Anomaly Discovery in Semantic Segmentation via Distillation Comparison Networks
Huan Zhou
Shi Gong
Yu Zhou
Zengqiang Zheng
Ronghua Liu
Xiang Bai
21
1
0
18 Dec 2021
Pixel Distillation: A New Knowledge Distillation Scheme for Low-Resolution Image Recognition
Guangyu Guo
Dingwen Zhang
Longfei Han
Nian Liu
Ming-Ming Cheng
Junwei Han
29
2
0
17 Dec 2021
Finding Deviated Behaviors of the Compressed DNN Models for Image Classifications
Yongqiang Tian
Wuqi Zhang
Ming Wen
Shing-Chi Cheung
Chengnian Sun
Shiqing Ma
Yu Jiang
31
7
0
06 Dec 2021
KDCTime: Knowledge Distillation with Calibration on InceptionTime for Time-series Classification
Xueyuan Gong
Yain-Whar Si
Yongqi Tian
Cong Lin
Xinyuan Zhang
Xiaoxiang Liu
41
6
0
04 Dec 2021
Improved Knowledge Distillation via Adversarial Collaboration
Zhiqiang Liu
Chengkai Huang
Yanxia Liu
31
2
0
29 Nov 2021
Altering Backward Pass Gradients improves Convergence
Bishshoy Das
M. Mondal
Brejesh Lall
S. Joshi
Sumantra Dutta Roy
20
0
0
24 Nov 2021
Learning Interpretation with Explainable Knowledge Distillation
Raed Alharbi
Minh Nhat Vu
My T. Thai
14
15
0
12 Nov 2021
Meta-Teacher For Face Anti-Spoofing
Yunxiao Qin
Zitong Yu
Longbin Yan
Zezheng Wang
Chenxu Zhao
Zhen Lei
CVBM
25
61
0
12 Nov 2021
Edge-Cloud Polarization and Collaboration: A Comprehensive Survey for AI
Jiangchao Yao
Shengyu Zhang
Yang Yao
Feng Wang
Jianxin Ma
...
Kun Kuang
Chao-Xiang Wu
Fei Wu
Jingren Zhou
Hongxia Yang
24
91
0
11 Nov 2021
Oracle Teacher: Leveraging Target Information for Better Knowledge Distillation of CTC Models
J. Yoon
H. Kim
Hyeon Seung Lee
Sunghwan Ahn
N. Kim
38
1
0
05 Nov 2021
Low-Rank+Sparse Tensor Compression for Neural Networks
Cole Hawkins
Haichuan Yang
Meng Li
Liangzhen Lai
Vikas Chandra
21
3
0
02 Nov 2021
Arch-Net: Model Distillation for Architecture Agnostic Model Deployment
Weixin Xu
Zipeng Feng
Shuangkang Fang
Song Yuan
Yi Yang
Shuchang Zhou
MQ
30
1
0
01 Nov 2021
Rethinking the Knowledge Distillation From the Perspective of Model Calibration
Lehan Yang
Jincen Song
21
2
0
31 Oct 2021
Diversity Matters When Learning From Ensembles
G. Nam
Jongmin Yoon
Yoonho Lee
Juho Lee
UQCV
FedML
VLM
43
36
0
27 Oct 2021
When in Doubt, Summon the Titans: Efficient Inference with Large Models
A. S. Rawat
Manzil Zaheer
A. Menon
Amr Ahmed
Sanjiv Kumar
25
7
0
19 Oct 2021
A Dimensionality Reduction Approach for Convolutional Neural Networks
L. Meneghetti
N. Demo
G. Rozza
93
14
0
18 Oct 2021
Network Augmentation for Tiny Deep Learning
Han Cai
Chuang Gan
Ji Lin
Song Han
25
29
0
17 Oct 2021
Pro-KD: Progressive Distillation by Following the Footsteps of the Teacher
Mehdi Rezagholizadeh
A. Jafari
Puneeth Salad
Pranav Sharma
Ali Saheb Pasand
A. Ghodsi
81
18
0
16 Oct 2021
Scalable Consistency Training for Graph Neural Networks via Self-Ensemble Self-Distillation
Cole Hawkins
V. Ioannidis
Soji Adeshina
George Karypis
GNN
SSL
31
2
0
12 Oct 2021
Light-weight Deformable Registration using Adversarial Learning with Distilling Knowledge
M. Tran
Tuong Khanh Long Do
Huy Tran
Erman Tjiputra
Quang-Dieu Tran
Anh Nguyen
MedIm
32
25
0
04 Oct 2021