Decoupled Knowledge Distillation

16 March 2022
Borui Zhao
Quan Cui
Renjie Song
Yiyu Qiu
Jiajun Liang
arXiv:2203.08679 (abs · PDF · HTML) · GitHub (855★)
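
The cited paper's core idea is to rewrite the classical KD loss as a weighted sum of a target-class term (TCKD) and a non-target-class term (NCKD) so the two parts can be balanced independently. Below is a minimal PyTorch sketch of that decomposition; it is not the authors' reference implementation (that lives in the linked GitHub repo), and the default values of alpha, beta, and temperature are illustrative assumptions only.

```python
import torch
import torch.nn.functional as F


def dkd_loss(logits_student, logits_teacher, target,
             alpha=1.0, beta=8.0, temperature=4.0):
    """Decoupled KD loss: alpha * TCKD + beta * NCKD (minimal sketch)."""
    num_classes = logits_student.size(1)
    gt_mask = F.one_hot(target, num_classes=num_classes).float()

    # Temperature-softened class probabilities.
    p_s = F.softmax(logits_student / temperature, dim=1)
    p_t = F.softmax(logits_teacher / temperature, dim=1)

    # TCKD: binary KL over (target, non-target) probability mass.
    pt_s = (p_s * gt_mask).sum(dim=1, keepdim=True)
    pt_t = (p_t * gt_mask).sum(dim=1, keepdim=True)
    b_s = torch.cat([pt_s, 1.0 - pt_s], dim=1)
    b_t = torch.cat([pt_t, 1.0 - pt_t], dim=1)
    tckd = F.kl_div(torch.log(b_s + 1e-8), b_t, reduction="batchmean")

    # NCKD: KL over non-target classes only. Masking the target logit with a
    # large negative value before softmax renormalizes the remaining classes.
    log_p_s_nt = F.log_softmax(
        logits_student / temperature - 1000.0 * gt_mask, dim=1)
    p_t_nt = F.softmax(
        logits_teacher / temperature - 1000.0 * gt_mask, dim=1)
    nckd = F.kl_div(log_p_s_nt, p_t_nt, reduction="batchmean")

    # T^2 keeps the gradient scale comparable to a hard-label CE term.
    return (alpha * tckd + beta * nckd) * temperature ** 2
```

In practice this term is typically added to the student's cross-entropy loss on hard labels; the alpha, beta, and temperature values above are placeholders, not the paper's prescribed settings for any particular benchmark.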

Papers citing "Decoupled Knowledge Distillation"

Showing 50 of 262 citing papers
Generalizable Heterogeneous Federated Cross-Correlation and Instance Similarity Learning
Wenke Huang
J. J. Valero-Mas
Dasaem Jeong
Bo Du
FedML
82
50
0
28 Sep 2023
Emphasized Non-Target Speaker Knowledge in Knowledge Distillation for Automatic Speaker Verification
Duc-Tuan Truong
Ruijie Tao
J. Yip
Kong Aik Lee
Chng Eng Siong
71
6
0
26 Sep 2023
Data Upcycling Knowledge Distillation for Image Super-Resolution
Yun-feng Zhang
Wei Li
Simiao Li
Hanting Chen
Zhaopeng Tu
Wenjun Wang
Bingyi Jing
Hai-lin Wang
Jie Hu
67
3
0
25 Sep 2023
Graph-enhanced Optimizers for Structure-aware Recommendation Embedding Evolution
Cong Xu
Jun Wang
Jianyong Wang
Wei Zhang
GNN
66
1
0
24 Sep 2023
Weight Averaging Improves Knowledge Distillation under Domain Shift
Valeriy Berezovskiy
Nikita Morozov
MoMe
78
1
0
20 Sep 2023
Towards Real-Time Neural Video Codec for Cross-Platform Application Using Calibration Information
Kuan Tian
Yonghang Guan
Jin-Peng Xiang
Jun Zhang
Xiao Han
Wei Yang
68
7
0
20 Sep 2023
Distilling HuBERT with LSTMs via Decoupled Knowledge Distillation
Danilo de Oliveira
Timo Gerkmann
VLM
64
6
0
18 Sep 2023
Rethinking Momentum Knowledge Distillation in Online Continual Learning
Nicolas Michel
Maorong Wang
L. Xiao
T. Yamasaki
CLL
100
11
0
06 Sep 2023
Knowledge Distillation Layer that Lets the Student Decide
Ada Gorgun
Y. Z. Gürbüz
A. Aydin Alatan
60
0
0
06 Sep 2023
Bridging Cross-task Protocol Inconsistency for Distillation in Dense Object Detection
Longrong Yang
Xianpan Zhou
Xuewei Li
Liang Qiao
Zheyang Li
Zi-Liang Yang
Gaoang Wang
Xi Li
102
24
0
28 Aug 2023
Dynamic Residual Classifier for Class Incremental Learning
Xiu-yan Chen
Xiaobin Chang
82
18
0
25 Aug 2023
Semi-Supervised Learning via Weight-aware Distillation under Class Distribution Mismatch
Pan Du
Suyun Zhao
Zisen Sheng
Cuiping Li
Hong Chen
62
8
0
23 Aug 2023
Omnidirectional Information Gathering for Knowledge Transfer-based Audio-Visual Navigation
Jinyu Chen
Wenguan Wang
Siying Liu
Hongsheng Li
Yi Yang
98
8
0
20 Aug 2023
DomainAdaptor: A Novel Approach to Test-time Adaptation
Jian Zhang
Lei Qi
Yinghuan Shi
Yang Gao
OOD, TTA
88
17
0
20 Aug 2023
A Survey on Model Compression for Large Language Models
Xunyu Zhu
Jian Li
Yong Liu
Can Ma
Weiping Wang
139
233
0
15 Aug 2023
CTP: Towards Vision-Language Continual Pretraining via Compatible Momentum Contrast and Topology Preservation
Hongguang Zhu
Yunchao Wei
Xiaodan Liang
Chunjie Zhang
Yao-Min Zhao
VLM
72
30
0
14 Aug 2023
MixBCT: Towards Self-Adapting Backward-Compatible Training
Yuefeng Liang
Yufeng Zhang
Shiliang Zhang
Yaowei Wang
Shengze Xiao
KenLi Li
Xiaoyu Wang
71
1
0
14 Aug 2023
Estimator Meets Equilibrium Perspective: A Rectified Straight Through Estimator for Binary Neural Networks Training
Xiao-Ming Wu
Dian Zheng
Zuhao Liu
Weishi Zheng
MQ
118
18
0
13 Aug 2023
Multi-Label Knowledge Distillation
Penghui Yang
Ming-Kun Xie
Chen-Chen Zong
Lei Feng
Gang Niu
Masashi Sugiyama
Sheng-Jun Huang
82
10
0
12 Aug 2023
Teacher-Student Architecture for Knowledge Distillation: A Survey
Chengming Hu
Xuan Li
Danyang Liu
Haolun Wu
Xi Chen
Ju Wang
Xue Liu
92
19
0
08 Aug 2023
Towards Better Query Classification with Multi-Expert Knowledge Condensation in JD Ads Search
Kun-Peng Ning
Ming Pang
Zheng Fang
Xue Jiang
Xi-Wei Zhao
Changping Peng
Zhangang Lin
Jinghe Hu
Jingping Shao
116
0
0
02 Aug 2023
NormKD: Normalized Logits for Knowledge Distillation
Zhihao Chi
Tu Zheng
Hengjia Li
Zheng Yang
Boxi Wu
Binbin Lin
D. Cai
82
14
0
01 Aug 2023
Fundus-Enhanced Disease-Aware Distillation Model for Retinal Disease Classification from OCT Images
Lehan Wang
Weihang Dai
Mei Jin
Chubin Ou
Xuelong Li
62
5
0
01 Aug 2023
BearingPGA-Net: A Lightweight and Deployable Bearing Fault Diagnosis Network via Decoupled Knowledge Distillation and FPGA Acceleration
Jing-Xiao Liao
Shenghui Wei
Chengyun Xie
T. Zeng
Jinwei Sun
Shiping Zhang
Xiaoge Zhang
Fenglei Fan
26
15
0
31 Jul 2023
Effective Whole-body Pose Estimation with Two-stages Distillation
Zhendong Yang
Ailing Zeng
Chun Yuan
Yu Li
136
181
0
29 Jul 2023
Contrastive Knowledge Amalgamation for Unsupervised Image Classification
Shangde Gao
Yichao Fu
Li-Yu Daisy Liu
Yuqiang Han
66
8
0
27 Jul 2023
Class-relation Knowledge Distillation for Novel Class Discovery
Peiyan Gu
Chuyu Zhang
Rui Xu
Xuming He
80
17
0
18 Jul 2023
Cumulative Spatial Knowledge Distillation for Vision Transformers
Borui Zhao
Renjie Song
Jiajun Liang
64
15
0
17 Jul 2023
DOT: A Distillation-Oriented Trainer
Borui Zhao
Quan Cui
Renjie Song
Jiajun Liang
60
7
0
17 Jul 2023
The Staged Knowledge Distillation in Video Classification: Harmonizing Student Progress by a Complementary Weakly Supervised Framework
Chao Wang
Zhenghang Tang
85
2
0
11 Jul 2023
Make A Long Image Short: Adaptive Token Length for Vision Transformers
Yuqin Zhu
Yichen Zhu
ViT
121
17
0
05 Jul 2023
Review of Large Vision Models and Visual Prompt Engineering
Jiaqi Wang
Zheng Liu
Lin Zhao
Zihao Wu
Chong Ma
...
Bao Ge
Yixuan Yuan
Dinggang Shen
Tianming Liu
Shu Zhang
VLM, LRM
155
162
0
03 Jul 2023
Review helps learn better: Temporal Supervised Knowledge Distillation
Dongwei Wang
Zhi Han
Yanmei Wang
Xi’ai Chen
Baichen Liu
Yandong Tang
146
1
0
03 Jul 2023
Enhancing Mapless Trajectory Prediction through Knowledge Distillation
Yuning Wang
Pu Zhang
Lei Bai
Jianru Xue
80
4
0
25 Jun 2023
CrossKD: Cross-Head Knowledge Distillation for Object Detection
Jiabao Wang
Yuming Chen
Zhaohui Zheng
Xiang Li
Ming-Ming Cheng
Qibin Hou
160
40
0
20 Jun 2023
Categories of Response-Based, Feature-Based, and Relation-Based Knowledge Distillation
Chuanguang Yang
Xinqiang Yu
Zhulin An
Yongjun Xu
VLM, OffRL
184
26
0
19 Jun 2023
VIPriors 3: Visual Inductive Priors for Data-Efficient Deep Learning Challenges
Robert-Jan Bruintjes
A. Lengyel
Marcos Baptista-Rios
O. Kayhan
Davide Zambrano
Nergis Tomen
Jan van Gemert
65
9
0
31 May 2023
Are Large Kernels Better Teachers than Transformers for ConvNets?
Tianjin Huang
Lu Yin
Zhenyu Zhang
Lijuan Shen
Meng Fang
Mykola Pechenizkiy
Zhangyang Wang
Shiwei Liu
90
13
0
30 May 2023
Semi-supervised Pathological Image Segmentation via Cross Distillation of Multiple Attentions
Lanfeng Zhong
Xin Liao
Shaoting Zhang
Guotai Wang
26
16
0
30 May 2023
Test-Time Adaptation with CLIP Reward for Zero-Shot Generalization in Vision-Language Models
Shuai Zhao
Xiaohan Wang
Linchao Zhu
Yezhou Yang
VLM
104
23
0
29 May 2023
Improving Knowledge Distillation via Regularizing Feature Norm and Direction
Yuzhu Wang
Lechao Cheng
Manni Duan
Yongheng Wang
Zunlei Feng
Shu Kong
95
22
0
26 May 2023
Triplet Knowledge Distillation
Xijun Wang
Dongyang Liu
Meina Kan
Chunrui Han
Zhongqin Wu
Shiguang Shan
68
3
0
25 May 2023
VanillaKD: Revisit the Power of Vanilla Knowledge Distillation from Small Scale to Large Scale
Zhiwei Hao
Jianyuan Guo
Kai Han
Han Hu
Chang Xu
Yunhe Wang
70
16
0
25 May 2023
Knowledge Diffusion for Distillation
Tao Huang
Yuan Zhang
Mingkai Zheng
Shan You
Fei Wang
Chao Qian
Chang Xu
108
56
0
25 May 2023
Deakin RF-Sensing: Experiments on Correlated Knowledge Distillation for Monitoring Human Postures with Radios
Shiva Raj Pokhrel
Jonathan Kua
Deol Satish
Phil Williams
A. Zaslavsky
S. W. Loke
Jinho Choi
87
5
0
24 May 2023
Decoupled Kullback-Leibler Divergence Loss
Jiequan Cui
Zhuotao Tian
Zhisheng Zhong
Xiaojuan Qi
Bei Yu
Hanwang Zhang
78
45
0
23 May 2023
Is Synthetic Data From Diffusion Models Ready for Knowledge Distillation?
Zheng Li
Yuxuan Li
Penghai Zhao
Renjie Song
Xiang Li
Jian Yang
97
20
0
22 May 2023
Lifting the Curse of Capacity Gap in Distilling Language Models
Chen Zhang
Yang Yang
Jiahao Liu
Jingang Wang
Yunsen Xian
Benyou Wang
Dawei Song
MoE
69
20
0
20 May 2023
Student-friendly Knowledge Distillation
Mengyang Yuan
Bo Lang
Fengnan Quan
92
21
0
18 May 2023
Lightweight Self-Knowledge Distillation with Multi-source Information Fusion
Xucong Wang
Pengchao Han
Lei Guo
54
1
0
16 May 2023