Respecting Transfer Gap in Knowledge Distillation
arXiv 2210.12787 · 23 October 2022
Yulei Niu, Long Chen, Chang Zhou, Hanwang Zhang
Papers citing "Respecting Transfer Gap in Knowledge Distillation"
Showing 50 of 62 citing papers.
CKD: Contrastive Knowledge Distillation from A Sample-wise Perspective
  Wencheng Zhu, Xin Zhou, Pengfei Zhu, Yu Wang, Qinghua Hu · VLM · 99 / 1 / 0 · 22 Apr 2024
Introspective Distillation for Robust Question Answering
  Yulei Niu, Hanwang Zhang · 66 / 59 / 0 · 01 Nov 2021
Self-Supervised Learning Disentangled Group Representation as Feature
  Tan Wang, Zhongqi Yue, Jianqiang Huang, Qianru Sun, Hanwang Zhang · OOD · 59 / 69 / 0 · 28 Oct 2021
Distilling Knowledge via Knowledge Review
  Pengguang Chen, Shu Liu, Hengshuang Zhao, Jiaya Jia · 181 / 437 / 0 · 19 Apr 2021
Is Label Smoothing Truly Incompatible with Knowledge Distillation: An Empirical Study
  Zhiqiang Shen, Zechun Liu, Dejia Xu, Zitian Chen, Kwang-Ting Cheng, Marios Savvides · 41 / 76 / 0 · 01 Apr 2021
Refine Myself by Teaching Myself: Feature Refinement via Self-Knowledge Distillation
  Mingi Ji, Seungjae Shin, Seunghyun Hwang, Gibeom Park, Il-Chul Moon · 32 / 123 / 0 · 15 Mar 2021
Distilling Causal Effect of Data in Class-Incremental Learning
  Xinting Hu, Kaihua Tang, Chunyan Miao, Xiansheng Hua, Hanwang Zhang · CML · 228 / 175 / 0 · 02 Mar 2021
Rethinking Soft Labels for Knowledge Distillation: A Bias-Variance Tradeoff Perspective
  Helong Zhou, Liangchen Song, Jiajie Chen, Ye Zhou, Guoli Wang, Junsong Yuan, Qian Zhang · 67 / 174 / 0 · 01 Feb 2021
Disentangling Label Distribution for Long-tailed Visual Recognition
  Youngkyu Hong, Seungju Han, Kwanghee Choi, Seokjun Seo, Beomsu Kim, Buru Chang · 55 / 237 / 0 · 01 Dec 2020
Long-tailed Recognition by Routing Diverse Distribution-Aware Experts
  Xudong Wang, Long Lian, Zhongqi Miao, Ziwei Liu, Stella X. Yu · 105 / 388 / 0 · 05 Oct 2020
Long-Tailed Classification by Keeping the Good and Removing the Bad Momentum Causal Effect
  Kaihua Tang, Jianqiang Huang, Hanwang Zhang · CML · 98 / 446 / 0 · 28 Sep 2020
Densely Guided Knowledge Distillation using Multiple Teacher Assistants
  Wonchul Son, Jaemin Na, Junyong Choi, Wonjun Hwang · 59 / 115 / 0 · 18 Sep 2020
Long-tail learning via logit adjustment
  A. Menon, Sadeep Jayasumana, A. S. Rawat, Himanshu Jain, Andreas Veit, Sanjiv Kumar · 109 / 707 / 0 · 14 Jul 2020
Self-Knowledge Distillation with Progressive Refinement of Targets
  Kyungyul Kim, Byeongmoon Ji, Doyoung Yoon, Sangheum Hwang · ODL · 69 / 180 / 0 · 22 Jun 2020
Knowledge Distillation Meets Self-Supervision
  Guodong Xu, Ziwei Liu, Xiaoxiao Li, Chen Change Loy · FedML · 69 / 283 / 0 · 12 Jun 2020
Knowledge Distillation: A Survey
  Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao · VLM · 62 / 2,932 / 0 · 09 Jun 2020
Regularizing Class-wise Predictions via Self-knowledge Distillation
  Sukmin Yun, Jongjin Park, Kimin Lee, Jinwoo Shin · 54 / 278 / 0 · 31 Mar 2020
Spatio-Temporal Graph for Video Captioning with Knowledge Distillation
  Boxiao Pan, Haoye Cai, De-An Huang, Kuan-Hui Lee, Adrien Gaidon, Ehsan Adeli, Juan Carlos Niebles · 56 / 236 / 0 · 31 Mar 2020
Object Relational Graph with Teacher-Recommended Learning for Video Captioning
  Ziqi Zhang, Yaya Shi, Chunfen Yuan, Bing Li, Peijin Wang, Weiming Hu, Zhengjun Zha · VLM · 78 / 272 / 0 · 26 Feb 2020
Contrastive Representation Distillation
  Yonglong Tian, Dilip Krishnan, Phillip Isola · 141 / 1,045 / 0 · 23 Oct 2019
Decoupling Representation and Classifier for Long-Tailed Recognition
  Bingyi Kang, Saining Xie, Marcus Rohrbach, Zhicheng Yan, Albert Gordo, Jiashi Feng, Yannis Kalantidis · OODD · 172 / 1,213 / 0 · 21 Oct 2019
On the Efficacy of Knowledge Distillation
  Jang Hyun Cho, Bharath Hariharan · 92 / 605 / 0 · 03 Oct 2019
Distillation ≈ Early Stopping? Harvesting Dark Knowledge Utilizing Anisotropic Information Retrieval For Overparameterized Neural Network
  Bin Dong, Jikai Hou, Yiping Lu, Zhihua Zhang · 64 / 41 / 0 · 02 Oct 2019
Similarity-Preserving Knowledge Distillation
  Frederick Tung, Greg Mori · 113 / 973 / 0 · 23 Jul 2019
Invariant Risk Minimization
  Martín Arjovsky, Léon Bottou, Ishaan Gulrajani, David Lopez-Paz · OOD · 177 / 2,218 / 0 · 05 Jul 2019
Distilling Object Detectors with Fine-grained Feature Imitation
  Tao Wang, Li-xin Yuan, Xiaopeng Zhang, Jiashi Feng · ObjD · 51 / 381 / 0 · 09 Jun 2019
When Does Label Smoothing Help?
  Rafael Müller, Simon Kornblith, Geoffrey E. Hinton · UQCV · 181 / 1,938 / 0 · 06 Jun 2019
Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation
  Linfeng Zhang, Jiebo Song, Anni Gao, Jingwei Chen, Chenglong Bao, Kaisheng Ma · FedML · 60 / 857 / 0 · 17 May 2019
Knowledge Distillation via Route Constrained Optimization
  Xiao Jin, Baoyun Peng, Yichao Wu, Yu Liu, Jiaheng Liu, Ding Liang, Junjie Yan, Xiaolin Hu · 63 / 170 / 0 · 19 Apr 2019
Variational Information Distillation for Knowledge Transfer
  SungSoo Ahn, S. Hu, Andreas C. Damianou, Neil D. Lawrence, Zhenwen Dai · 87 / 616 / 0 · 11 Apr 2019
Relational Knowledge Distillation
  Wonpyo Park, Dongju Kim, Yan Lu, Minsu Cho · 63 / 1,405 / 0 · 10 Apr 2019
A Comprehensive Overhaul of Feature Distillation
  Byeongho Heo, Jeesoo Kim, Sangdoo Yun, Hyojin Park, Nojun Kwak, J. Choi · 76 / 574 / 0 · 03 Apr 2019
Correlation Congruence for Knowledge Distillation
  Baoyun Peng, Xiao Jin, Jiaheng Liu, Shunfeng Zhou, Yichao Wu, Yu Liu, Dongsheng Li, Zhaoning Zhang · 86 / 510 / 0 · 03 Apr 2019
Knowledge Adaptation for Efficient Semantic Segmentation
  Tong He, Chunhua Shen, Zhi Tian, Dong Gong, Changming Sun, Youliang Yan · SSeg · 40 / 225 / 0 · 12 Mar 2019
Structured Knowledge Distillation for Dense Prediction
  Yifan Liu, Chris Liu, Jingdong Wang, Zhenbo Luo · 64 / 582 / 0 · 11 Mar 2019
Improved Knowledge Distillation via Teacher Assistant
  Seyed Iman Mirzadeh, Mehrdad Farajtabar, Ang Li, Nir Levine, Akihiro Matsukawa, H. Ghasemzadeh · 92 / 1,074 / 0 · 09 Feb 2019
Spatial Knowledge Distillation to aid Visual Reasoning
  Somak Aditya, Rudra Saha, Yezhou Yang, Chitta Baral · 53 / 15 / 0 · 10 Dec 2018
Online Model Distillation for Efficient Video Inference
  Ravi Teja Mullapudi, Steven Chen, Keyi Zhang, Deva Ramanan, Kayvon Fatahalian · VGen · 57 / 115 / 0 · 06 Dec 2018
Snapshot Distillation: Teacher-Student Optimization in One Generation
  Chenglin Yang, Lingxi Xie, Chi Su, Alan Yuille · 61 / 193 / 0 · 01 Dec 2018
Knowledge Transfer via Distillation of Activation Boundaries Formed by Hidden Neurons
  Byeongho Heo, Minsik Lee, Sangdoo Yun, J. Choi · 55 / 521 / 0 · 08 Nov 2018
The Deconfounded Recommender: A Causal Inference Approach to Recommendation
  Yixin Wang, Dawen Liang, Laurent Charlin, David M. Blei · CML · 55 / 73 / 0 · 20 Aug 2018
ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design
  Ningning Ma, Xiangyu Zhang, Haitao Zheng, Jian Sun · 162 / 4,970 / 0 · 30 Jul 2018
Bias-Reduced Uncertainty Estimation for Deep Neural Classifiers
  Yonatan Geifman, Guy Uziel, Ran El-Yaniv · UQCV · 54 / 140 / 0 · 21 May 2018
Born Again Neural Networks
  Tommaso Furlanello, Zachary Chase Lipton, Michael Tschannen, Laurent Itti, Anima Anandkumar · 68 / 1,030 / 0 · 12 May 2018
Learning Deep Representations with Probabilistic Knowledge Transfer
  Nikolaos Passalis, Anastasios Tefas · 57 / 411 / 0 · 28 Mar 2018
Paraphrasing Complex Network: Network Compression via Factor Transfer
  Jangho Kim, Seonguk Park, Nojun Kwak · 66 / 549 / 0 · 14 Feb 2018
MobileNetV2: Inverted Residuals and Linear Bottlenecks
  Mark Sandler, Andrew G. Howard, Menglong Zhu, A. Zhmoginov, Liang-Chieh Chen · 171 / 19,204 / 0 · 13 Jan 2018
Graph Distillation for Action Detection with Privileged Modalities
  Zelun Luo, Jun-Ting Hsieh, Lu Jiang, Juan Carlos Niebles, Li Fei-Fei · 73 / 104 / 0 · 30 Nov 2017
Incremental Learning of Object Detectors without Catastrophic Forgetting
  K. Shmelkov, Cordelia Schmid, Alahari Karteek · ObjD · 71 / 519 / 0 · 23 Aug 2017
ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices
  Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, Jian Sun · AI4TS · 132 / 6,850 / 0 · 04 Jul 2017