Decoupled Knowledge Distillation (arXiv:2203.08679)
Borui Zhao, Quan Cui, Renjie Song, Yiyu Qiu, Jiajun Liang. 16 March 2022.
Links: arXiv (abs) · PDF · HTML · GitHub (855★)

Papers citing "Decoupled Knowledge Distillation" (showing 50 of 262):
- Robust Saliency-Aware Distillation for Few-shot Fine-grained Visual Recognition. Haiqi Liu, Chong Chen, Xinrong Gong, Tong Zhang. 12 May 2023.
- Visual Tuning. Bruce X. B. Yu, Jianlong Chang, Haixin Wang, Lin Liu, Shijie Wang, ..., Lingxi Xie, Haojie Li, Zhouchen Lin, Qi Tian, Chang Wen Chen. [VLM] 10 May 2023.
- Do Not Blindly Imitate the Teacher: Using Perturbed Loss for Knowledge Distillation. Rongzhi Zhang, Jiaming Shen, Tianqi Liu, Jia-Ling Liu, Michael Bendersky, Marc Najork, Chao Zhang. 08 May 2023.
- Refined Response Distillation for Class-Incremental Player Detection. Liang Bai, Hangjie Yuan, Tao Feng, Hong Song, Jian Yang. 01 May 2023.
- LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions. Minghao Wu, Abdul Waheed, Chiyu Zhang, Muhammad Abdul-Mageed, Alham Fikri Aji. [ALM] 27 Apr 2023.
- Class Attention Transfer Based Knowledge Distillation. Ziyao Guo, Haonan Yan, Hui Li, Xiao-La Lin. 25 Apr 2023.
- Model Conversion via Differentially Private Data-Free Distillation. Bochao Liu, Pengju Wang, Shikun Li, Dan Zeng, Shiming Ge. [FedML] 25 Apr 2023.
- Improving Knowledge Distillation via Transferring Learning Ability. Long Liu, Tong Li, Hui Cheng. 24 Apr 2023.
- Function-Consistent Feature Distillation. Dongyang Liu, Meina Kan, Shiguang Shan, Xilin Chen. 24 Apr 2023.
- LiDAR2Map: In Defense of LiDAR-Based Semantic Map Construction Using Online Camera Distillation. Song Wang, Wentong Li, Wenyu Liu, Xiaolu Liu, Jianke Zhu. 22 Apr 2023.
- DeepReShape: Redesigning Neural Networks for Efficient Private Inference. N. Jha, Brandon Reagen. 20 Apr 2023.
- Decouple Non-parametric Knowledge Distillation For End-to-end Speech Translation. Hao Zhang, Nianwen Si, Yaqi Chen, Wenlin Zhang, Xukui Yang, Dan Qu, Zhen Li. 20 Apr 2023.
- Knowledge Distillation Under Ideal Joint Classifier Assumption. Huayu Li, Xiwen Chen, G. Ditzler, Janet Roveda, Ao Li. 19 Apr 2023.
- Wild Face Anti-Spoofing Challenge 2023: Benchmark and Results. Dong Wang, Jiaxin Guo, Qiqi Shao, Haochi He, Zhian Chen, ..., Sergio Escalera, Hugo Jair Escalante, Lei Zhen, Jun Wan, Jiankang Deng. [CVBM, AAML] 12 Apr 2023.
- Grouped Knowledge Distillation for Deep Face Recognition. Weisong Zhao, Xiangyu Zhu, Kaiwen Guo, Xiaoyu Zhang, Zhen Lei. [CVBM] 10 Apr 2023.
- Towards Efficient Task-Driven Model Reprogramming with Foundation Models. Shoukai Xu, Jiangchao Yao, Ran Luo, Shuhai Zhang, Zihao Lian, Mingkui Tan, Bo Han, Yaowei Wang. 05 Apr 2023.
- Long-Tailed Visual Recognition via Self-Heterogeneous Integration with Knowledge Excavation. Yang Jin, Mengke Li, Yang Lu, Y. Cheung, Hanzi Wang. 03 Apr 2023.
- Decomposed Cross-modal Distillation for RGB-based Temporal Action Detection. Pilhyeon Lee, Taeoh Kim, Minho Shim, Dongyoon Wee, H. Byun. 30 Mar 2023.
- DisWOT: Student Architecture Search for Distillation WithOut Training. Peijie Dong, Lujun Li, Zimian Wei. 28 Mar 2023.
- Decoupled Multimodal Distilling for Emotion Recognition. Yong Li, Yuan-Zheng Wang, Zhen Cui. 24 Mar 2023.
- A Simple and Generic Framework for Feature Distillation via Channel-wise Transformation. Ziwei Liu, Yongtao Wang, Xiaojie Chu. 23 Mar 2023.
- From Knowledge Distillation to Self-Knowledge Distillation: A Unified Approach with Normalized Loss and Customized Soft Labels. Zhendong Yang, Ailing Zeng, Zhe Li, Tianke Zhang, Chun Yuan, Yu Li. 23 Mar 2023.
- Semantic Scene Completion with Cleaner Self. Fengyun Wang, Dong Zhang, Hanwang Zhang, Jinhui Tang, Qianru Sun. 17 Mar 2023.
- Towards a Smaller Student: Capacity Dynamic Distillation for Efficient Image Retrieval. Yi Xie, Huaidong Zhang, Xuemiao Xu, Jianqing Zhu, Shengfeng He. [VLM] 16 Mar 2023.
- Knowledge Distillation from Single to Multi Labels: an Empirical Study. Youcai Zhang, Yuzhuo Qin, Heng-Ye Liu, Yanhao Zhang, Yaqian Li, X. Gu. [VLM] 15 Mar 2023.
- MetaMixer: A Regularization Strategy for Online Knowledge Distillation. Maorong Wang, L. Xiao, T. Yamasaki. [KELM, MoE] 14 Mar 2023.
- Hierarchical Network with Decoupled Knowledge Distillation for Speech Emotion Recognition. Ziping Zhao, Haiquan Wang, Haishuai Wang, Björn Schuller. 09 Mar 2023.
- Learn More for Food Recognition via Progressive Self-Distillation. Yaohui Zhu, Linhu Liu, Jiang Tian. 09 Mar 2023.
- Distilling Calibrated Student from an Uncalibrated Teacher. Ishan Mishra, Sethu Vamsi Krishna, Deepak Mishra. [FedML] 22 Feb 2023.
- Fuzzy Knowledge Distillation from High-Order TSK to Low-Order TSK. Xiongtao Zhang, Zezong Yin, Yunliang Jiang, Yizhang Jiang, Da-Song Sun, Yong-Jin Liu. 16 Feb 2023.
- Jaccard Metric Losses: Optimizing the Jaccard Index with Soft Labels. Zifu Wang, Xuefei Ning, Matthew B. Blaschko. [VLM] 11 Feb 2023.
- EVC: Towards Real-Time Neural Image Compression with Mask Decay. G. Wang, Jiahao Li, Bin Li, Yan Lu. 10 Feb 2023.
- Guided Hybrid Quantization for Object Detection in Multimodal Remote Sensing Imagery via One-to-one Self-teaching. Jiaqing Zhang, Jie Lei, Weiying Xie, Yunsong Li, Wenxuan Wang. [MQ] 31 Dec 2022.
- Discriminator-Cooperated Feature Map Distillation for GAN Compression. Tie Hu, Mingbao Lin, Lizhou You, Chia-Wen Lin, Rongrong Ji. 29 Dec 2022.
- BD-KD: Balancing the Divergences for Online Knowledge Distillation. Ibtihel Amara, N. Sepahvand, B. Meyer, W. Gross, J. Clark. 25 Dec 2022.
- Exploring Content Relationships for Distilling Efficient GANs. Lizhou You, Mingbao Lin, Tie Hu, Chia-Wen Lin, Rongrong Ji. 21 Dec 2022.
- Curriculum Temperature for Knowledge Distillation. Zheng Li, Xiang Li, Lingfeng Yang, Borui Zhao, Renjie Song, Lei Luo, Jun Yu Li, Jian Yang. 29 Nov 2022.
- Class-aware Information for Logit-based Knowledge Distillation. Shuoxi Zhang, Hanpeng Liu, John E. Hopcroft, Kun He. 27 Nov 2022.
- D^3ETR: Decoder Distillation for Detection Transformer. Xiaokang Chen, Jiahui Chen, Yang Liu, Gang Zeng. 17 Nov 2022.
- KD-DETR: Knowledge Distillation for Detection Transformer with Consistent Distillation Points Sampling. Yu Wang, Xin Li, Shengzhao Wen, Fu-En Yang, Wanping Zhang, Gang Zhang, Haocheng Feng, Junyu Han. 15 Nov 2022.
- Long-Range Zero-Shot Generative Deep Network Quantization. Yan Luo, Yangcheng Gao, Zhao Zhang, Haijun Zhang, Mingliang Xu, Meng Wang. [MQ] 13 Nov 2022.
- Completely Heterogeneous Federated Learning. Chang-Shu Liu, Yuwen Yang, Xun Cai, Yue Ding, Hongtao Lu. [FedML] 28 Oct 2022.
- Online Cross-Layer Knowledge Distillation on Graph Neural Networks with Deep Supervision. Jiongyu Guo, Defang Chen, Can Wang. 25 Oct 2022.
- SA-MLP: Distilling Graph Knowledge from GNNs into Structure-Aware MLP. Jie Chen, Shouzhen Chen, Mingyuan Bai, Junbin Gao, Junping Zhang, Jian Pu. 18 Oct 2022.
- Meta-Ensemble Parameter Learning. Zhengcong Fei, Shuman Tian, Junshi Huang, Xiaoming Wei, Xiaolin K. Wei. [OOD] 05 Oct 2022.
- Attention Distillation: self-supervised vision transformer students need more guidance. Kai Wang, Fei Yang, Joost van de Weijer. [ViT] 03 Oct 2022.
- Slimmable Networks for Contrastive Self-supervised Learning. Shuai Zhao, Xiaohan Wang, Linchao Zhu, Yi Yang. 30 Sep 2022.
- ViTKD: Practical Guidelines for ViT feature knowledge distillation. Zhendong Yang, Zhe Li, Ailing Zeng, Zexian Li, Chun Yuan, Yu Li. 06 Sep 2022.
- Dynamic Data-Free Knowledge Distillation by Easy-to-Hard Learning Strategy. Jingru Li, Sheng Zhou, Liangcheng Li, Haishuai Wang, Zhi Yu, Jiajun Bu. 29 Aug 2022.
- Disentangle and Remerge: Interventional Knowledge Distillation for Few-Shot Object Detection from A Conditional Causal Perspective. Jiangmeng Li, Yanan Zhang, Jingyao Wang, Hui Xiong, Chengbo Jiao, Xiaohui Hu, Changwen Zheng, Gang Hua. [CML] 26 Aug 2022.