Distilling Knowledge via Knowledge Review

19 April 2021
Pengguang Chen
Shu Liu
Hengshuang Zhao
Jiaya Jia
ArXiv (abs) · PDF · HTML · GitHub (272★)

Papers citing "Distilling Knowledge via Knowledge Review"

50 / 215 papers shown
Cosine Similarity Knowledge Distillation for Individual Class Information Transfer
Gyeongdo Ham
Seonghak Kim
Suin Lee
Jae-Hyeok Lee
Daeshik Kim
55
6
0
24 Nov 2023
Maximizing Discrimination Capability of Knowledge Distillation with Energy Function
Seonghak Kim
Gyeongdo Ham
Suin Lee
Donggon Jang
Daeshik Kim
229
4
0
24 Nov 2023
Robustness-Reinforced Knowledge Distillation with Correlation Distance and Network Pruning
Seonghak Kim
Gyeongdo Ham
Yucheol Cho
Daeshik Kim
114
4
0
23 Nov 2023
DONUT-hole: DONUT Sparsification by Harnessing Knowledge and Optimizing Learning Efficiency
Azhar Shaikh
Michael Cochez
Denis Diachkov
Michiel de Rijcke
Sahar Yousefi
68
0
0
09 Nov 2023
One-for-All: Bridge the Gap Between Heterogeneous Architectures in Knowledge Distillation
Zhiwei Hao
Jianyuan Guo
Kai Han
Yehui Tang
Han Hu
Yunhe Wang
Chang Xu
105
72
0
30 Oct 2023
torchdistill Meets Hugging Face Libraries for Reproducible, Coding-Free Deep Learning Studies: A Case Study on NLP
Yoshitomo Matsubara
VLM
74
1
0
26 Oct 2023
Understanding the Effects of Projectors in Knowledge Distillation
Yudong Chen
Sen Wang
Jiajun Liu
Xuwei Xu
Frank de Hoog
Brano Kusy
Zi Huang
94
0
0
26 Oct 2023
Leveraging Vision-Language Models for Improving Domain Generalization in Image Classification
Sravanti Addepalli
Ashish Ramayee Asokan
Lakshay Sharma
R. V. Babu
VLM
59
21
0
12 Oct 2023
Bidirectional Knowledge Reconfiguration for Lightweight Point Cloud Analysis
Peipei Li
Xing Cui
Yibo Hu
Man Zhang
Ting Yao
Tao Mei
83
0
0
08 Oct 2023
LumiNet: The Bright Side of Perceptual Knowledge Distillation
Md. Ismail Hossain
M. M. L. Elahi
Sameera Ramasinghe
A. Cheraghian
Fuad Rahman
Nabeel Mohammed
Shafin Rahman
67
1
0
05 Oct 2023
Improving Knowledge Distillation with Teacher's Explanation
S. Chowdhury
Ben Liang
A. Tizghadam
Ilijc Albanese
FAtt
28
0
0
04 Oct 2023
ADU-Depth: Attention-based Distillation with Uncertainty Modeling for Depth Estimation
Zizhang Wu
Zhuozheng Li
Zhi-Gang Fan
Yunzhe Wu
Xiaoquan Wang
Rui Tang
Jian Pu
74
1
0
26 Sep 2023
On Model Explanations with Transferable Neural Pathways
Xinmiao Lin
Wentao Bao
Qi Yu
Yu Kong
37
0
0
18 Sep 2023
Two-Step Knowledge Distillation for Tiny Speech Enhancement
Rayan Daod Nathoo
M. Kegler
Marko Stamenovic
59
6
0
15 Sep 2023
Multimodal Fish Feeding Intensity Assessment in Aquaculture
Meng Cui
Xubo Liu
Haohe Liu
Zhuangzhuang Du
Tao Chen
Guoping Lian
Daoliang Li
Wenwu Wang
79
5
0
10 Sep 2023
Knowledge Distillation Layer that Lets the Student Decide
Ada Gorgun
Y. Z. Gürbüz
A. Aydin Alatan
60
0
0
06 Sep 2023
DMKD: Improving Feature-based Knowledge Distillation for Object Detection Via Dual Masking Augmentation
Guangqi Yang
Yin Tang
Zhijian Wu
Jun Yu Li
Jianhua Xu
Xili Wan
63
4
0
06 Sep 2023
SpikeBERT: A Language Spikformer Learned from BERT with Knowledge Distillation
Changze Lv
Jianhan Xu
Chenxi Gu
Zixuan Ling
Cenyuan Zhang
Xiaoqing Zheng
Xuanjing Huang
71
8
0
29 Aug 2023
SynthDistill: Face Recognition with Knowledge Distillation from Synthetic Data
Hatef Otroshi
Anjith George
Sébastien Marcel
73
11
0
28 Aug 2023
Story Visualization by Online Text Augmentation with Context Memory
Daechul Ahn
Daneul Kim
Gwangmo Song
Seung Wook Kim
Honglak Lee
Dongyeop Kang
Jonghyun Choi
DiffM
56
5
0
15 Aug 2023
Multi-Label Knowledge Distillation
Penghui Yang
Ming-Kun Xie
Chen-Chen Zong
Lei Feng
Gang Niu
Masashi Sugiyama
Sheng-Jun Huang
82
10
0
12 Aug 2023
Data-Free Model Extraction Attacks in the Context of Object Detection
Harshit Shah
G. Aravindhan
Pavan Kulkarni
Yuvaraj Govidarajulu
Manojkumar Somabhai Parmar
MIACV AAML
74
4
0
09 Aug 2023
AICSD: Adaptive Inter-Class Similarity Distillation for Semantic Segmentation
Amir M. Mansourian
Rozhan Ahmadi
S. Kasaei
123
2
0
08 Aug 2023
NormKD: Normalized Logits for Knowledge Distillation
Zhihao Chi
Tu Zheng
Hengjia Li
Zheng Yang
Boxi Wu
Binbin Lin
D. Cai
82
14
0
01 Aug 2023
Distribution Shift Matters for Knowledge Distillation with Webly Collected Images
Jialiang Tang
Shuo Chen
Gang Niu
Masashi Sugiyama
Chenggui Gong
74
14
0
21 Jul 2023
DreamTeacher: Pretraining Image Backbones with Deep Generative Models
Daiqing Li
Huan Ling
Amlan Kar
David Acuna
Seung Wook Kim
Karsten Kreis
Antonio Torralba
Sanja Fidler
VLM DiffM
77
29
0
14 Jul 2023
Distilling Large Vision-Language Model with Out-of-Distribution Generalizability
Xuanlin Li
Yunhao Fang
Minghua Liu
Z. Ling
Zhuowen Tu
Haoran Su
VLM
95
25
0
06 Jul 2023
Efficient Visual Fault Detection for Freight Train Braking System via Heterogeneous Self Distillation in the Wild
Yang Zhang
Huilin Pan
Yang Zhou
Mingying Li
Guo-dong Sun
71
8
0
03 Jul 2023
Review helps learn better: Temporal Supervised Knowledge Distillation
Dongwei Wang
Zhi Han
Yanmei Wang
Xi’ai Chen
Baichen Liu
Yandong Tang
144
1
0
03 Jul 2023
Subclass-balancing Contrastive Learning for Long-tailed Recognition
Chengkai Hou
Jieyu Zhang
Hong Wang
Dinesh Manocha
78
24
0
28 Jun 2023
Deep Transfer Learning for Intelligent Vehicle Perception: a Survey
Xinyi Liu
Jinlong Li
Jin Ma
Huiming Sun
Zhigang Xu
Tianyu Zhang
Hongkai Yu
137
28
0
26 Jun 2023
Accelerating Molecular Graph Neural Networks via Knowledge Distillation
Filip Ekstrom Kelvinius
D. Georgiev
Artur Toshev
Johannes Gasteiger
105
9
0
26 Jun 2023
Categories of Response-Based, Feature-Based, and Relation-Based Knowledge Distillation
Chuanguang Yang
Xinqiang Yu
Zhulin An
Yongjun Xu
VLM OffRL
184
26
0
19 Jun 2023
Are Large Kernels Better Teachers than Transformers for ConvNets?
Tianjin Huang
Lu Yin
Zhenyu Zhang
Lijuan Shen
Meng Fang
Mykola Pechenizkiy
Zhangyang Wang
Shiwei Liu
90
13
0
30 May 2023
Improving Knowledge Distillation via Regularizing Feature Norm and Direction
Yuzhu Wang
Lechao Cheng
Manni Duan
Yongheng Wang
Zunlei Feng
Shu Kong
95
22
0
26 May 2023
Triplet Knowledge Distillation
Xijun Wang
Dongyang Liu
Meina Kan
Chunrui Han
Zhongqin Wu
Shiguang Shan
68
3
0
25 May 2023
VanillaKD: Revisit the Power of Vanilla Knowledge Distillation from Small Scale to Large Scale
Zhiwei Hao
Jianyuan Guo
Kai Han
Han Hu
Chang Xu
Yunhe Wang
70
16
0
25 May 2023
Knowledge Diffusion for Distillation
Tao Huang
Yuan Zhang
Mingkai Zheng
Shan You
Fei Wang
Chao Qian
Chang Xu
108
56
0
25 May 2023
Deakin RF-Sensing: Experiments on Correlated Knowledge Distillation for Monitoring Human Postures with Radios
Shiva Raj Pokhrel
Jonathan Kua
Deol Satish
Phil Williams
A. Zaslavsky
S. W. Loke
Jinho Choi
87
5
0
24 May 2023
Decoupled Kullback-Leibler Divergence Loss
Jiequan Cui
Zhuotao Tian
Zhisheng Zhong
Xiaojuan Qi
Bei Yu
Hanwang Zhang
78
45
0
23 May 2023
NORM: Knowledge Distillation via N-to-One Representation Matching
Xiaolong Liu
Lujun Li
Chao Li
Anbang Yao
113
71
0
23 May 2023
Is Synthetic Data From Diffusion Models Ready for Knowledge Distillation?
Zheng Li
Yuxuan Li
Penghai Zhao
Renjie Song
Xiang Li
Jian Yang
92
20
0
22 May 2023
Student-friendly Knowledge Distillation
Mengyang Yuan
Bo Lang
Fengnan Quan
92
21
0
18 May 2023
Visual Tuning
Bruce X. B. Yu
Jianlong Chang
Haixin Wang
Lin Liu
Shijie Wang
...
Lingxi Xie
Haojie Li
Zhouchen Lin
Qi Tian
Chang Wen Chen
VLM
171
41
0
10 May 2023
DynamicKD: An Effective Knowledge Distillation via Dynamic Entropy Correction-Based Distillation for Gap Optimizing
Songling Zhu
Ronghua Shang
Bo Yuan
Weitong Zhang
Yangyang Li
Licheng Jiao
58
7
0
09 May 2023
Do Not Blindly Imitate the Teacher: Using Perturbed Loss for Knowledge Distillation
Rongzhi Zhang
Jiaming Shen
Tianqi Liu
Jia-Ling Liu
Michael Bendersky
Marc Najork
Chao Zhang
104
20
0
08 May 2023
Class Attention Transfer Based Knowledge Distillation
Ziyao Guo
Haonan Yan
Hui Li
Xiao-La Lin
62
69
0
25 Apr 2023
Improving Knowledge Distillation via Transferring Learning Ability
Long Liu
Tong Li
Hui Cheng
15
1
0
24 Apr 2023
Function-Consistent Feature Distillation
Dongyang Liu
Meina Kan
Shiguang Shan
Xilin Chen
110
19
0
24 Apr 2023
Knowledge Distillation Under Ideal Joint Classifier Assumption
Huayu Li
Xiwen Chen
G. Ditzler
Janet Roveda
Ao Li
42
1
0
19 Apr 2023