Correlation Congruence for Knowledge Distillation

3 April 2019 · arXiv: 1904.01802
Baoyun Peng, Xiao Jin, Jiaheng Liu, Shunfeng Zhou, Yichao Wu, Yu Liu, Dongsheng Li, Zhaoning Zhang
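The paper behind this page proposes distilling the correlations among instances in a batch, so the student matches the teacher's relational structure rather than only its per-sample outputs. As a rough orientation for readers, here is a minimal PyTorch-style sketch of that idea; the function name is illustrative and the cosine-similarity correlation is a simplification (the paper derives kernel-based correlation mappings), so this is not the authors' reference implementation.

```python
import torch
import torch.nn.functional as F

def correlation_congruence_loss(feat_t: torch.Tensor, feat_s: torch.Tensor) -> torch.Tensor:
    """Match inter-sample correlation matrices of teacher and student embeddings.

    feat_t: (batch, dim_t) teacher embeddings; feat_s: (batch, dim_s) student
    embeddings. The embedding dims may differ, since only the (batch, batch)
    correlation matrices are compared.
    """
    t = F.normalize(feat_t, dim=1)  # unit-norm rows: dot products become cosine similarities
    s = F.normalize(feat_s, dim=1)
    corr_t = t @ t.t()              # (batch, batch) teacher correlation matrix
    corr_s = s @ s.t()              # (batch, batch) student correlation matrix
    return F.mse_loss(corr_s, corr_t)

# Toy usage: random tensors stand in for backbone features.
loss = correlation_congruence_loss(torch.randn(32, 512), torch.randn(32, 128))
```

Penalizing the gap between the two (batch, batch) matrices is what "correlation congruence" refers to: the student is pushed to preserve how the teacher relates instances to one another.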

Papers citing "Correlation Congruence for Knowledge Distillation"

Showing 50 of 274 citing papers (bracketed codes are the site's topic tags):
  • Improving Knowledge Distillation via Transferring Learning Ability. Long Liu, Tong Li, Hui Cheng. 24 Apr 2023.
  • eTag: Class-Incremental Learning with Embedding Distillation and Task-Oriented Generation. Libo Huang, Yan Zeng, Chuanguang Yang, Zhulin An, Boyu Diao, Yongjun Xu. 20 Apr 2023. [CLL]
  • Deep Collective Knowledge Distillation. Jihyeon Seo, Kyusam Oh, Chanho Min, Yongkeun Yun, Sungwoo Cho. 18 Apr 2023.
  • SFT-KD-Recon: Learning a Student-friendly Teacher for Knowledge Distillation in Magnetic Resonance Image Reconstruction. NagaGayathri Matcha, Sriprabha Ramanarayanan, Mohammad Al Fahim, S. RahulG, Keerthi Ram, M. Sivaprakasam. 11 Apr 2023.
  • DisWOT: Student Architecture Search for Distillation WithOut Training. Peijie Dong, Lujun Li, Zimian Wei. 28 Mar 2023.
  • Understanding the Role of the Projector in Knowledge Distillation. Roy Miles, K. Mikolajczyk. 20 Mar 2023.
  • Towards a Smaller Student: Capacity Dynamic Distillation for Efficient Image Retrieval. Yi Xie, Huaidong Zhang, Xuemiao Xu, Jianqing Zhu, Shengfeng He. 16 Mar 2023. [VLM]
  • MetaMixer: A Regularization Strategy for Online Knowledge Distillation. Maorong Wang, L. Xiao, T. Yamasaki. 14 Mar 2023. [KELM, MoE]
  • A Contrastive Knowledge Transfer Framework for Model Compression and Transfer Learning. Kaiqi Zhao, Yitao Chen, Ming Zhao. 14 Mar 2023. [VLM]
  • TAKT: Target-Aware Knowledge Transfer for Whole Slide Image Classification. Conghao Xiong, Yi-Mou Lin, Hao Chen, Hao Zheng, Dong Wei, Yefeng Zheng, Joseph J. Y. Sung, Irwin King. 10 Mar 2023.
  • Generic-to-Specific Distillation of Masked Autoencoders. Wei Huang, Zhiliang Peng, Li Dong, Furu Wei, Jianbin Jiao, QiXiang Ye. 28 Feb 2023.
  • Leveraging Angular Distributions for Improved Knowledge Distillation. Eunyeong Jeon, Hongjun Choi, Ankita Shukla, P. Turaga. 27 Feb 2023.
  • Graph-based Knowledge Distillation: A survey and experimental evaluation. Jing Liu, Tongya Zheng, Guanzheng Zhang, Qinfen Hao. 27 Feb 2023.
  • Knowledge Distillation in Federated Edge Learning: A Survey. Zhiyuan Wu, Sheng Sun, Yuwei Wang, Min Liu, Xue Jiang, Runhan Li, Bo Gao. 14 Jan 2023. [FedML]
  • 3D Point Cloud Pre-training with Knowledge Distillation from 2D Images. Yuan Yao, Yuanhan Zhang, Zhen-fei Yin, Jiebo Luo, Wanli Ouyang, Xiaoshui Huang. 17 Dec 2022. [3DPC]
  • Enhancing Low-Density EEG-Based Brain-Computer Interfaces with Similarity-Keeping Knowledge Distillation. Xin Huang, Sung-Yu Chen, Chun-Shu Wei. 06 Dec 2022.
  • Hint-dynamic Knowledge Distillation. Yiyang Liu, Chenxin Li, Xiaotong Tu, Xinghao Ding, Yue Huang. 30 Nov 2022.
  • Curriculum Temperature for Knowledge Distillation. Zheng Li, Xiang Li, Lingfeng Yang, Borui Zhao, Renjie Song, Lei Luo, Jun Yu Li, Jian Yang. 29 Nov 2022.
  • Class-aware Information for Logit-based Knowledge Distillation. Shuoxi Zhang, Hanpeng Liu, J. Hopcroft, Kun He. 27 Nov 2022.
  • Structured Knowledge Distillation Towards Efficient and Compact Multi-View 3D Detection. Linfeng Zhang, Yukang Shi, Hung-Shuo Tai, Zhipeng Zhang, Yuan He, Ke Wang, Kaisheng Ma. 14 Nov 2022.
  • An Interpretable Neuron Embedding for Static Knowledge Distillation. Wei Han, Yang Wang, Christian Böhm, Junming Shao. 14 Nov 2022.
  • Hilbert Distillation for Cross-Dimensionality Networks. Dian Qin, Haishuai Wang, Zhe Liu, Hongjia Xu, Sheng Zhou, Jiajun Bu. 08 Nov 2022.
  • Understanding the Role of Mixup in Knowledge Distillation: An Empirical Study. Hongjun Choi, Eunyeong Jeon, Ankita Shukla, P. Turaga. 08 Nov 2022.
  • Teacher-Student Architecture for Knowledge Learning: A Survey. Chengming Hu, Xuan Li, Dan Liu, Xi Chen, Ju Wang, Xue Liu. 28 Oct 2022.
  • Multimodal Transformer Distillation for Audio-Visual Synchronization. Xuan-Bo Chen, Haibin Wu, Chung-Che Wang, Hung-yi Lee, J. Jang. 27 Oct 2022.
  • Improved Feature Distillation via Projector Ensemble. Yudong Chen, Sen Wang, Jiajun Liu, Xuwei Xu, Frank de Hoog, Zi Huang. 27 Oct 2022.
  • Respecting Transfer Gap in Knowledge Distillation. Yulei Niu, Long Chen, Chan Zhou, Hanwang Zhang. 23 Oct 2022.
  • Are You Stealing My Model? Sample Correlation for Fingerprinting Deep Neural Networks. Jiyang Guan, Jian Liang, Ran He. 21 Oct 2022. [AAML, MLAU]
  • Few-Shot Learning of Compact Models via Task-Specific Meta Distillation. Yong Wu, Shekhor Chanda, M. Hosseinzadeh, Zhi Liu, Yang Wang. 18 Oct 2022. [VLM]
  • Efficient Knowledge Distillation from Model Checkpoints. Chaofei Wang, Qisen Yang, Rui Huang, S. Song, Gao Huang. 12 Oct 2022. [FedML]
  • Asymmetric Temperature Scaling Makes Larger Networks Teach Well Again. Xin-Chun Li, Wenxuan Fan, Shaoming Song, Yinchuan Li, Bingshuai Li, Yunfeng Shao, De-Chuan Zhan. 10 Oct 2022.
  • Bi-directional Weakly Supervised Knowledge Distillation for Whole Slide Image Classification. Linhao Qu, Xiao-Zhuo Luo, Manning Wang, Zhijian Song. 07 Oct 2022. [WSOD]
  • Meta-Ensemble Parameter Learning. Zhengcong Fei, Shuman Tian, Junshi Huang, Xiaoming Wei, Xiaolin K. Wei. 05 Oct 2022. [OOD]
  • Attention Distillation: self-supervised vision transformer students need more guidance. Kai Wang, Fei Yang, Joost van de Weijer. 03 Oct 2022. [ViT]
  • Towards a Unified View of Affinity-Based Knowledge Distillation. Vladimir Li, A. Maki. 30 Sep 2022.
  • Slimmable Networks for Contrastive Self-supervised Learning. Shuai Zhao, Xiaohan Wang, Linchao Zhu, Yi Yang. 30 Sep 2022.
  • CES-KD: Curriculum-based Expert Selection for Guided Knowledge Distillation. Ibtihel Amara, M. Ziaeefard, B. Meyer, W. Gross, J. Clark. 15 Sep 2022.
  • Transformer-CNN Cohort: Semi-supervised Semantic Segmentation by the Best of Both Students. Xueye Zheng, Yuan Luo, Hao Wang, Chong Fu, Lin Wang. 06 Sep 2022. [ViT]
  • FAKD: Feature Augmented Knowledge Distillation for Semantic Segmentation. Jianlong Yuan, Qiang Qi, Fei Du, Zhibin Wang, Fan Wang, Yifan Liu. 30 Aug 2022.
  • CMD: Self-supervised 3D Action Representation Learning with Cross-modal Mutual Distillation. Yunyao Mao, Wen-gang Zhou, Zhenbo Lu, Jiajun Deng, Houqiang Li. 26 Aug 2022.
  • Mind the Gap in Distilling StyleGANs. Guodong Xu, Yuenan Hou, Ziwei Liu, Chen Change Loy. 18 Aug 2022. [GAN]
  • Progressive Cross-modal Knowledge Distillation for Human Action Recognition. Jianyuan Ni, A. Ngu, Yan Yan. 17 Aug 2022. [HAI]
  • Class-Incremental Learning with Cross-Space Clustering and Controlled Transfer. Arjun Ashok, K. J. Joseph, V. Balasubramanian. 07 Aug 2022. [CLL]
  • HIRE: Distilling High-order Relational Knowledge From Heterogeneous Graph Neural Networks. Jing Liu, Tongya Zheng, Qinfen Hao. 25 Jul 2022.
  • Online Knowledge Distillation via Mutual Contrastive Learning for Visual Recognition. Chuanguang Yang, Zhulin An, Helong Zhou, Fuzhen Zhuang, Yongjun Xu, Qian Zhang. 23 Jul 2022.
  • Irrelevant Pixels are Everywhere: Find and Exclude Them for More Efficient Computer Vision. Caleb Tung, Abhinav Goel, Xiao Hu, Nick Eliopoulos, Emmanuel S. Amobi, George K. Thiruvathukal, Vipin Chaudhary, Yu Lu. 21 Jul 2022.
  • KD-MVS: Knowledge Distillation Based Self-supervised Learning for Multi-view Stereo. Yikang Ding, Qingtian Zhu, Xiangyue Liu, Wentao Yuan, Haotian Zhang, Chi Zhang. 21 Jul 2022.
  • Deep Semantic Statistics Matching (D2SM) Denoising Network. Kangfu Mei, Vishal M. Patel, Rui Huang. 19 Jul 2022. [DiffM]
  • HEAD: HEtero-Assists Distillation for Heterogeneous Object Detectors. Luting Wang, Xiaojie Li, Yue Liao, Jiang, Jianlong Wu, Fei-Yue Wang, Chao Qian, Si Liu. 12 Jul 2022.
  • Contrastive Deep Supervision. Linfeng Zhang, Xin Chen, Junbo Zhang, Runpei Dong, Kaisheng Ma. 12 Jul 2022.