Class-aware Information for Logit-based Knowledge Distillation
Shuoxi Zhang, Hanpeng Liu, John E. Hopcroft, Kun He
arXiv:2211.14773 · 27 November 2022

Cited By
Papers citing "Class-aware Information for Logit-based Knowledge Distillation" (21 of 21 papers shown):

Knowledge Distillation with the Reused Teacher Classifier
Defang Chen, Jianhan Mei, Hailin Zhang, C. Wang, Yan Feng, Chun-Yen Chen
26 Mar 2022 · 170 citations

Decoupled Knowledge Distillation
Borui Zhao, Quan Cui, Renjie Song, Yiyu Qiu, Jiajun Liang
16 Mar 2022 · 551 citations

Comparing Kullback-Leibler Divergence and Mean Squared Error Loss in Knowledge Distillation
Taehyeon Kim, Jaehoon Oh, Nakyil Kim, Sangwook Cho, Se-Young Yun
19 May 2021 · 237 citations

Refine Myself by Teaching Myself: Feature Refinement via Self-Knowledge Distillation
Mingi Ji, Seungjae Shin, Seunghyun Hwang, Gibeom Park, Il-Chul Moon
15 Mar 2021 · 124 citations

Cross-Layer Distillation with Semantic Calibration
Defang Chen, Jian-Ping Mei, Yuan Zhang, Can Wang, Yan Feng, Chun-Yen Chen
06 Dec 2020 · 301 citations · FedML

Contrastive Clustering
Yunfan Li, Peng Hu, Zitao Liu, Dezhong Peng, Qiufeng Wang, Xi Peng
21 Sep 2020 · 633 citations

Matching Guided Distillation
Kaiyu Yue, Jiangfan Deng, Feng Zhou
23 Aug 2020 · 49 citations

ResNeSt: Split-Attention Networks
Hang Zhang, Chongruo Wu, Zhongyue Zhang, Yi Zhu, Yanghua Peng, ..., Tong He, Jonas W. Mueller, R. Manmatha, Mu Li, Alex Smola
19 Apr 2020 · 1,480 citations

Preparing Lessons: Improve Knowledge Distillation with Better Supervision
Tiancheng Wen, Shenqi Lai, Xueming Qian
18 Nov 2019 · 70 citations

Momentum Contrast for Unsupervised Visual Representation Learning
Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, Ross B. Girshick
13 Nov 2019 · 12,146 citations · SSL

Contrastive Representation Distillation
Yonglong Tian, Dilip Krishnan, Phillip Isola
23 Oct 2019 · 1,056 citations

Similarity-Preserving Knowledge Distillation
Frederick Tung, Greg Mori
23 Jul 2019 · 983 citations

Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation
Linfeng Zhang, Jiebo Song, Anni Gao, Jingwei Chen, Chenglong Bao, Kaisheng Ma
17 May 2019 · 865 citations · FedML

Correlation Congruence for Knowledge Distillation
Baoyun Peng, Xiao Jin, Jiaheng Liu, Shunfeng Zhou, Yichao Wu, Yu Liu, Dongsheng Li, Zhaoning Zhang
03 Apr 2019 · 513 citations

Structured Knowledge Distillation for Dense Prediction
Yifan Liu, Chris Liu, Jingdong Wang, Zhenbo Luo
11 Mar 2019 · 585 citations

MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, M. Andreetto, Hartwig Adam
17 Apr 2017 · 20,918 citations · 3DH

Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer
Sergey Zagoruyko, N. Komodakis
12 Dec 2016 · 2,590 citations

Aggregated Residual Transformations for Deep Neural Networks
Saining Xie, Ross B. Girshick, Piotr Dollár, Zhuowen Tu, Kaiming He
16 Nov 2016 · 10,358 citations

Wide Residual Networks
Sergey Zagoruyko, N. Komodakis
23 May 2016 · 8,005 citations

FitNets: Hints for Thin Deep Nets
Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, C. Gatta, Yoshua Bengio
19 Dec 2014 · 3,906 citations · FedML

ImageNet Large Scale Visual Recognition Challenge
Olga Russakovsky, Jia Deng, Hao Su, J. Krause, S. Satheesh, ..., A. Karpathy, A. Khosla, Michael S. Bernstein, Alexander C. Berg, Li Fei-Fei
01 Sep 2014 · 39,637 citations · VLM, ObjD