Correlation Congruence for Knowledge Distillation

3 April 2019
Baoyun Peng, Xiao Jin, Jiaheng Liu, Shunfeng Zhou, Yichao Wu, Yu Liu, Dongsheng Li, Zhaoning Zhang
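
For context on the paper the citations below refer to: its core idea is to distill not only per-instance outputs but also the correlations between instances, training the student so that its batch-wise embedding correlation matrix matches the teacher's. A minimal PyTorch-style sketch of such a correlation congruence term, assuming a plain normalized dot-product kernel (the paper additionally explores a Gaussian-RBF kernel approximated by a Taylor expansion; the function names and loss weights below are illustrative, not the authors' code):

```python
import torch
import torch.nn.functional as F

def correlation_congruence_loss(f_t: torch.Tensor, f_s: torch.Tensor) -> torch.Tensor:
    """Penalize the gap between teacher and student batch correlation matrices.

    f_t: teacher embeddings, shape (n, d_t)
    f_s: student embeddings, shape (n, d_s)
    (d_t and d_s may differ; only the n x n correlation matrices are compared)
    """
    f_t = F.normalize(f_t, dim=1)   # unit-norm rows, so the Gram matrix holds cosine correlations
    f_s = F.normalize(f_s, dim=1)
    c_t = f_t @ f_t.t()             # (n, n) teacher correlation matrix
    c_s = f_s @ f_s.t()             # (n, n) student correlation matrix
    n = f_t.size(0)
    # squared Frobenius norm of the difference, scaled by 1/n^2
    return (c_t - c_s).pow(2).sum() / (n * n)

def cckd_loss(logits_s, logits_t, targets, f_s, f_t,
              T: float = 4.0, beta: float = 1.0, gamma: float = 0.02):
    """Illustrative combined objective: cross-entropy + KL distillation + CC term.
    The weighting scheme and hyperparameter values are placeholders."""
    ce = F.cross_entropy(logits_s, targets)
    kd = F.kl_div(F.log_softmax(logits_s / T, dim=1),
                  F.softmax(logits_t / T, dim=1),
                  reduction="batchmean") * T * T
    cc = correlation_congruence_loss(f_t, f_s)
    return ce + beta * kd + gamma * cc
```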

Papers citing "Correlation Congruence for Knowledge Distillation"

24 / 274 papers shown
Prime-Aware Adaptive Distillation
Youcai Zhang, Zhonghao Lan, Yuchen Dai, Fangao Zeng, Yan Bai, Jie Chang, Yichen Wei · 04 Aug 2020

Learning to Learn Parameterized Classification Networks for Scalable Input Images
Duo Li, Anbang Yao, Qifeng Chen · 13 Jul 2020

Multi-fidelity Neural Architecture Search with Knowledge Distillation
I. Trofimov, Nikita Klyuchnikov, Mikhail Salnikov, Alexander N. Filippov, Evgeny Burnaev · 15 Jun 2020

Knowledge Distillation Meets Self-Supervision [FedML]
Guodong Xu, Ziwei Liu, Xiaoxiao Li, Chen Change Loy · 12 Jun 2020

Adjoined Networks: A Training Paradigm with Applications to Network Compression
Utkarsh Nath, Shrinu Kushagra, Yingzhen Yang · 10 Jun 2020

Knowledge Distillation: A Survey [VLM]
Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao · 09 Jun 2020

ResKD: Residual-Guided Knowledge Distillation
Xuewei Li, Songyuan Li, Bourahla Omar, Fei Wu, Xi Li · 08 Jun 2020

Multi-view Contrastive Learning for Online Knowledge Distillation
Chuanguang Yang, Zhulin An, Yongjun Xu · 07 Jun 2020

Channel Distillation: Channel-Wise Attention for Knowledge Distillation
Zaida Zhou, Chaoran Zhuge, Xinwei Guan, Wen Liu · 02 Jun 2020

Inter-Region Affinity Distillation for Road Marking Segmentation
Yuenan Hou, Zheng Ma, Chunxiao Liu, Tak-Wai Hui, Chen Change Loy · 11 Apr 2020

Teacher-Class Network: A Neural Network Compression Mechanism
Shaiq Munir Malik, Muhammad Umair Haider, Fnu Mohbat, Musab Rasheed, M. Taj · 07 Apr 2020

Neural Networks Are More Productive Teachers Than Human Raters: Active Mixup for Data-Efficient Knowledge Distillation from a Blackbox Model
Dongdong Wang, Yandong Li, Liqiang Wang, Boqing Gong · 31 Mar 2020

A Survey of Methods for Low-Power Deep Learning and Computer Vision [VLM]
Abhinav Goel, Caleb Tung, Yung-Hsiang Lu, George K. Thiruvathukal · 24 Mar 2020

Collaborative Distillation for Ultra-Resolution Universal Style Transfer
Huan Wang, Yijun Li, Yuehai Wang, Haoji Hu, Ming-Hsuan Yang · 18 Mar 2020

DEPARA: Deep Attribution Graph for Deep Knowledge Transferability
Mingli Song, Yixin Chen, Jingwen Ye, Xinchao Wang, Chengchao Shen, Feng Mao, Xiuming Zhang · 17 Mar 2020

SuperMix: Supervising the Mixing Data Augmentation
Ali Dabouei, Sobhan Soleymani, Fariborz Taherkhani, Nasser M. Nasrabadi · 10 Mar 2020

Knowledge distillation via adaptive instance normalization
Jing Yang, Brais Martínez, Adrian Bulat, Georgios Tzimiropoulos · 09 Mar 2020

Subclass Distillation
Rafael Müller, Simon Kornblith, Geoffrey E. Hinton · 10 Feb 2020

QUEST: Quantized embedding space for transferring knowledge
Himalaya Jain, Spyros Gidaris, N. Komodakis, P. Pérez, Matthieu Cord · 03 Dec 2019

Search to Distill: Pearls are Everywhere but not the Eyes
Yu Liu, Xuhui Jia, Mingxing Tan, Raviteja Vemulapalli, Yukun Zhu, Bradley Green, Xiaogang Wang · 20 Nov 2019

Contrastive Representation Distillation
Yonglong Tian, Dilip Krishnan, Phillip Isola · 23 Oct 2019

Extreme Low Resolution Activity Recognition with Confident Spatial-Temporal Attention Transfer
Yucai Bai, Qinglong Zou, Xieyuanli Chen, Lingxi Li, Zhengming Ding, Long Chen · 09 Sep 2019

Triplet Distillation for Deep Face Recognition [CVBM]
Yushu Feng, Huan Wang, Daniel T. Yi, Roland Hu · 11 May 2019

Large scale distributed neural network training through online distillation [FedML]
Rohan Anil, Gabriel Pereyra, Alexandre Passos, Róbert Ormándi, George E. Dahl, Geoffrey E. Hinton · 09 Apr 2018