Correlation Congruence for Knowledge Distillation (arXiv 1904.01802)
3 April 2019
Baoyun Peng, Xiao Jin, Jiaheng Liu, Shunfeng Zhou, Yichao Wu, Yu Liu, Dongsheng Li, Zhaoning Zhang
Papers citing "Correlation Congruence for Knowledge Distillation" (showing 24 of 274)

Prime-Aware Adaptive Distillation
Youcai Zhang, Zhonghao Lan, Yuchen Dai, Fangao Zeng, Yan Bai, Jie Chang, Yichen Wei
04 Aug 2020 · 40 citations

Learning to Learn Parameterized Classification Networks for Scalable Input Images
Duo Li, Anbang Yao, Qifeng Chen
13 Jul 2020 · 11 citations

Multi-fidelity Neural Architecture Search with Knowledge Distillation
I. Trofimov, Nikita Klyuchnikov, Mikhail Salnikov, Alexander N. Filippov, Evgeny Burnaev
15 Jun 2020 · 15 citations

Knowledge Distillation Meets Self-Supervision
Guodong Xu, Ziwei Liu, Xiaoxiao Li, Chen Change Loy
12 Jun 2020 · 280 citations · FedML

Adjoined Networks: A Training Paradigm with Applications to Network Compression
Utkarsh Nath, Shrinu Kushagra, Yingzhen Yang
10 Jun 2020 · 2 citations

Knowledge Distillation: A Survey
Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao
09 Jun 2020 · 2,843 citations · VLM

ResKD: Residual-Guided Knowledge Distillation
Xuewei Li, Songyuan Li, Bourahla Omar, Fei Wu, Xi Li
08 Jun 2020 · 47 citations

Multi-view Contrastive Learning for Online Knowledge Distillation
Chuanguang Yang, Zhulin An, Yongjun Xu
07 Jun 2020 · 23 citations

Channel Distillation: Channel-Wise Attention for Knowledge Distillation
Zaida Zhou, Chaoran Zhuge, Xinwei Guan, Wen Liu
02 Jun 2020 · 49 citations

Inter-Region Affinity Distillation for Road Marking Segmentation
Yuenan Hou, Zheng Ma, Chunxiao Liu, Tak-Wai Hui, Chen Change Loy
11 Apr 2020 · 121 citations

Teacher-Class Network: A Neural Network Compression Mechanism
Shaiq Munir Malik, Muhammad Umair Haider, Fnu Mohbat, Musab Rasheed, M. Taj
07 Apr 2020 · 5 citations

Neural Networks Are More Productive Teachers Than Human Raters: Active Mixup for Data-Efficient Knowledge Distillation from a Blackbox Model
Dongdong Wang, Yandong Li, Liqiang Wang, Boqing Gong
31 Mar 2020 · 48 citations

A Survey of Methods for Low-Power Deep Learning and Computer Vision
Abhinav Goel, Caleb Tung, Yung-Hsiang Lu, George K. Thiruvathukal
24 Mar 2020 · 92 citations · VLM

Collaborative Distillation for Ultra-Resolution Universal Style Transfer
Huan Wang, Yijun Li, Yuehai Wang, Haoji Hu, Ming-Hsuan Yang
18 Mar 2020 · 103 citations

DEPARA: Deep Attribution Graph for Deep Knowledge Transferability
Mingli Song, Yixin Chen, Jingwen Ye, Xinchao Wang, Chengchao Shen, Feng Mao, Xiuming Zhang
17 Mar 2020 · 29 citations

SuperMix: Supervising the Mixing Data Augmentation
Ali Dabouei, Sobhan Soleymani, Fariborz Taherkhani, Nasser M. Nasrabadi
10 Mar 2020 · 98 citations

Knowledge distillation via adaptive instance normalization
Jing Yang, Brais Martínez, Adrian Bulat, Georgios Tzimiropoulos
09 Mar 2020 · 23 citations

Subclass Distillation
Rafael Müller, Simon Kornblith, Geoffrey E. Hinton
10 Feb 2020 · 33 citations

QUEST: Quantized embedding space for transferring knowledge
Himalaya Jain, Spyros Gidaris, N. Komodakis, P. Pérez, Matthieu Cord
03 Dec 2019 · 14 citations

Search to Distill: Pearls are Everywhere but not the Eyes
Yu Liu, Xuhui Jia, Mingxing Tan, Raviteja Vemulapalli, Yukun Zhu, Bradley Green, Xiaogang Wang
20 Nov 2019 · 67 citations

Contrastive Representation Distillation
Yonglong Tian, Dilip Krishnan, Phillip Isola
23 Oct 2019 · 1,030 citations

Extreme Low Resolution Activity Recognition with Confident Spatial-Temporal Attention Transfer
Yucai Bai, Qinglong Zou, Xieyuanli Chen, Lingxi Li, Zhengming Ding, Long Chen
09 Sep 2019 · 3 citations

Triplet Distillation for Deep Face Recognition
Yushu Feng, Huan Wang, Daniel T. Yi, Roland Hu
11 May 2019 · 45 citations · CVBM

Large scale distributed neural network training through online distillation
Rohan Anil, Gabriel Pereyra, Alexandre Passos, Róbert Ormándi, George E. Dahl, Geoffrey E. Hinton
09 Apr 2018 · 404 citations · FedML