On the Efficacy of Knowledge Distillation

arXiv: 1910.01348
3 October 2019
Jang Hyun Cho
Bharath Hariharan

Papers citing "On the Efficacy of Knowledge Distillation"

50 / 319 papers shown
Boosting Residual Networks with Group Knowledge
Shengji Tang
Peng Ye
Baopu Li
Wei Lin
Tao Chen
Tong He
Chong Yu
Wanli Ouyang
46
5
0
26 Aug 2023
Fall Detection using Knowledge Distillation Based Long short-term memory for Offline Embedded and Low Power Devices
Hannah Zhou
Allison Chen
Celine Buer
Emily Chen
Kayleen Tang
Lauryn Gong
Zhiqi Liu
Jianbin Tang
16
1
0
24 Aug 2023
Omnidirectional Information Gathering for Knowledge Transfer-based Audio-Visual Navigation
Jinyu Chen
Wenguan Wang
Siying Liu
Hongsheng Li
Yi Yang
20
8
0
20 Aug 2023
Unlimited Knowledge Distillation for Action Recognition in the Dark
Ruibing Jin
Guosheng Lin
Min-man Wu
Jie Lin
Zhengguo Li
Xiaoli Li
Zhenghua Chen
16
1
0
18 Aug 2023
Learning Lightweight Object Detectors via Multi-Teacher Progressive Distillation
Shengcao Cao
Mengtian Li
James Hays
Deva Ramanan
Yu-xiong Wang
Liangyan Gui
VLM
26
11
0
17 Aug 2023
AICSD: Adaptive Inter-Class Similarity Distillation for Semantic Segmentation
Amir M. Mansourian
Rozhan Ahmadi
S. Kasaei
44
2
0
08 Aug 2023
NormKD: Normalized Logits for Knowledge Distillation
Zhihao Chi
Tu Zheng
Hengjia Li
Zheng Yang
Boxi Wu
Binbin Lin
D. Cai
32
13
0
01 Aug 2023
Cumulative Spatial Knowledge Distillation for Vision Transformers
Borui Zhao
Renjie Song
Jiajun Liang
34
14
0
17 Jul 2023
DOT: A Distillation-Oriented Trainer
Borui Zhao
Quan Cui
Renjie Song
Jiajun Liang
27
6
0
17 Jul 2023
Multimodal Distillation for Egocentric Action Recognition
Gorjan Radevski
Dusan Grujicic
Marie-Francine Moens
Matthew Blaschko
Tinne Tuytelaars
EgoV
30
23
0
14 Jul 2023
Physical-aware Cross-modal Adversarial Network for Wearable Sensor-based Human Action Recognition
Jianyuan Ni
Hao Tang
A. Ngu
Gaowen Liu
Yan Yan
35
3
0
07 Jul 2023
Review helps learn better: Temporal Supervised Knowledge Distillation
Dongwei Wang
Zhi Han
Yanmei Wang
Xi’ai Chen
Baichen Liu
Yandong Tang
60
1
0
03 Jul 2023
Streaming egocentric action anticipation: An evaluation scheme and approach
Antonino Furnari
G. Farinella
EgoV
24
3
0
29 Jun 2023
Accelerating Molecular Graph Neural Networks via Knowledge Distillation
Filip Ekstrom Kelvinius
D. Georgiev
Artur P. Toshev
Johannes Gasteiger
36
7
0
26 Jun 2023
H$_2$O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models
Zhenyu Zhang
Ying Sheng
Dinesh Manocha
Tianlong Chen
Lianmin Zheng
...
Yuandong Tian
Christopher Ré
Clark W. Barrett
Zhangyang Wang
Beidi Chen
VLM
63
255
0
24 Jun 2023
CrossKD: Cross-Head Knowledge Distillation for Object Detection
Jiabao Wang
Yuming Chen
Zhaohui Zheng
Xiang Li
Ming-Ming Cheng
Qibin Hou
46
33
0
20 Jun 2023
Categories of Response-Based, Feature-Based, and Relation-Based Knowledge Distillation
Chuanguang Yang
Xinqiang Yu
Zhulin An
Yongjun Xu
VLM
OffRL
88
22
0
19 Jun 2023
On the Amplification of Linguistic Bias through Unintentional Self-reinforcement Learning by Generative Language Models -- A Perspective
Minhyeok Lee
11
1
0
12 Jun 2023
UADB: Unsupervised Anomaly Detection Booster
Hangting Ye
Zhining Liu
Xinyi Shen
Wei Cao
Shun Zheng
Xiaofan Gui
Huishuai Zhang
Yi Chang
Jiang Bian
37
2
0
03 Jun 2023
Hypothesis Transfer Learning with Surrogate Classification Losses: Generalization Bounds through Algorithmic Stability
Anass Aghbalou
Guillaume Staerman
22
4
0
31 May 2023
Are Large Kernels Better Teachers than Transformers for ConvNets?
Tianjin Huang
Lu Yin
Zhenyu Zhang
Lijuan Shen
Meng Fang
Mykola Pechenizkiy
Zhangyang Wang
Shiwei Liu
38
13
0
30 May 2023
Triplet Knowledge Distillation
Xijun Wang
Dongyang Liu
Meina Kan
Chunrui Han
Zhongqin Wu
Shiguang Shan
37
3
0
25 May 2023
VanillaKD: Revisit the Power of Vanilla Knowledge Distillation from Small Scale to Large Scale
Zhiwei Hao
Jianyuan Guo
Kai Han
Han Hu
Chang Xu
Yunhe Wang
38
16
0
25 May 2023
On the Impact of Knowledge Distillation for Model Interpretability
Hyeongrok Han
Siwon Kim
Hyun-Soo Choi
Sungroh Yoon
24
4
0
25 May 2023
Combining Multi-Objective Bayesian Optimization with Reinforcement Learning for TinyML
M. Deutel
G. Kontes
Christopher Mutschler
Jürgen Teich
55
0
0
23 May 2023
Decoupled Kullback-Leibler Divergence Loss
Jiequan Cui
Zhuotao Tian
Zhisheng Zhong
Xiaojuan Qi
Bei Yu
Hanwang Zhang
39
38
0
23 May 2023
Lifting the Curse of Capacity Gap in Distilling Language Models
Chen Zhang
Yang Yang
Jiahao Liu
Jingang Wang
Yunsen Xian
Benyou Wang
Dawei Song
MoE
32
19
0
20 May 2023
Student-friendly Knowledge Distillation
Mengyang Yuan
Bo Lang
Fengnan Quan
20
17
0
18 May 2023
Tailoring Instructions to Student's Learning Levels Boosts Knowledge Distillation
Yuxin Ren
Zi-Qi Zhong
Xingjian Shi
Yi Zhu
Chun Yuan
Mu Li
27
7
0
16 May 2023
Towards Understanding and Improving Knowledge Distillation for Neural Machine Translation
Songming Zhang
Yunlong Liang
Shuaibo Wang
Wenjuan Han
Jian Liu
Jinan Xu
23
8
0
14 May 2023
DynamicKD: An Effective Knowledge Distillation via Dynamic Entropy Correction-Based Distillation for Gap Optimizing
Songling Zhu
Ronghua Shang
Bo Yuan
Weitong Zhang
Yangyang Li
Licheng Jiao
35
7
0
09 May 2023
DualCross: Cross-Modality Cross-Domain Adaptation for Monocular BEV Perception
Yunze Man
Liangyan Gui
Yu-xiong Wang
33
5
0
05 May 2023
Class Attention Transfer Based Knowledge Distillation
Ziyao Guo
Haonan Yan
Hui Li
Xiao-La Lin
18
63
0
25 Apr 2023
Distilling from Similar Tasks for Transfer Learning on a Budget
Kenneth Borup
Cheng Perng Phoo
Bharath Hariharan
30
2
0
24 Apr 2023
Improving Knowledge Distillation via Transferring Learning Ability
Long Liu
Tong Li
Hui Cheng
13
1
0
24 Apr 2023
LiDAR2Map: In Defense of LiDAR-Based Semantic Map Construction Using Online Camera Distillation
Song Wang
Wentong Li
Wenyu Liu
Xiaolu Liu
Jianke Zhu
43
17
0
22 Apr 2023
FIANCEE: Faster Inference of Adversarial Networks via Conditional Early Exits
Polina Karpikova
Radionova Ekaterina
A. Yaschenko
Andrei A. Spiridonov
Leonid Kostyushko
Riccardo Fabbricatore
Aleksei Ivakhnenko (Samsung AI Center)
25
3
0
20 Apr 2023
Constructing Deep Spiking Neural Networks from Artificial Neural Networks with Knowledge Distillation
Qi Xu
Yaxin Li
Jiangrong Shen
Jian K. Liu
Huajin Tang
Gang Pan
27
62
0
12 Apr 2023
A Survey on Recent Teacher-student Learning Studies
Min Gao
28
3
0
10 Apr 2023
Long-Tailed Visual Recognition via Self-Heterogeneous Integration with Knowledge Excavation
Yang Jin
Mengke Li
Yang Lu
Y. Cheung
Hanzi Wang
43
21
0
03 Apr 2023
Head3D: Complete 3D Head Generation via Tri-plane Feature Distillation
Y. Cheng
Yichao Yan
Wenhan Zhu
Ye Pan
Bowen Pan
Xiaokang Yang
3DH
37
3
0
28 Mar 2023
HOICLIP: Efficient Knowledge Transfer for HOI Detection with Vision-Language Models
Sha Ning
Longtian Qiu
Yongfei Liu
Xuming He
VLM
35
42
0
28 Mar 2023
DisWOT: Student Architecture Search for Distillation WithOut Training
Peijie Dong
Lujun Li
Zimian Wei
46
57
0
28 Mar 2023
Channel-Aware Distillation Transformer for Depth Estimation on Nano Drones
Ning Zhang
F. Nex
G. Vosselman
N. Kerle
34
1
0
18 Mar 2023
Towards a Smaller Student: Capacity Dynamic Distillation for Efficient Image Retrieval
Yi Xie
Huaidong Zhang
Xuemiao Xu
Jianqing Zhu
Shengfeng He
VLM
21
13
0
16 Mar 2023
Reinforce Data, Multiply Impact: Improved Model Accuracy and Robustness with Dataset Reinforcement
Fartash Faghri
Hadi Pouransari
Sachin Mehta
Mehrdad Farajtabar
Ali Farhadi
Mohammad Rastegari
Oncel Tuzel
43
9
0
15 Mar 2023
Knowledge Distillation from Single to Multi Labels: an Empirical Study
Youcai Zhang
Yuzhuo Qin
Heng-Ye Liu
Yanhao Zhang
Yaqian Li
X. Gu
VLM
55
2
0
15 Mar 2023
MetaMixer: A Regularization Strategy for Online Knowledge Distillation
Maorong Wang
L. Xiao
T. Yamasaki
KELM
MoE
32
1
0
14 Mar 2023
Leveraging Angular Distributions for Improved Knowledge Distillation
Eunyeong Jeon
Hongjun Choi
Ankita Shukla
Pavan Turaga
11
8
0
27 Feb 2023
Progressive Ensemble Distillation: Building Ensembles for Efficient Inference
D. Dennis
Abhishek Shetty
A. Sevekari
K. Koishida
Virginia Smith
FedML
34
0
0
20 Feb 2023