Distilling Knowledge via Knowledge Review

19 April 2021
Pengguang Chen
Shu-Lin Liu
Hengshuang Zhao
Jiaya Jia

Papers citing "Distilling Knowledge via Knowledge Review"

50 / 74 papers shown
Head-Tail-Aware KL Divergence in Knowledge Distillation for Spiking Neural Networks
Tianqing Zhang
Zixin Zhu
Kairong Yu
Hongwei Wang
151
0
0
29 Apr 2025
Swapped Logit Distillation via Bi-level Teacher Alignment
Stephen Ekaputra Limantoro
Jhe-Hao Lin
Chih-Yu Wang
Yi-Lung Tsai
Hong-Han Shuai
Ching-Chun Huang
Wen-Huang Cheng
54
0
0
27 Apr 2025
HDC: Hierarchical Distillation for Multi-level Noisy Consistency in Semi-Supervised Fetal Ultrasound Segmentation
Tran Quoc Khanh Le
Nguyen Lan Vi Vu
Ha-Hieu Pham
Xuan-Loc Huynh
T. Nguyen
Minh Huu Nhat Le
Quan Nguyen
Hien Nguyen
46
0
0
14 Apr 2025
Distilling Stereo Networks for Performant and Efficient Leaner Networks
Rafia Rahim
Samuel Woerz
A. Zell
77
0
0
24 Mar 2025
VRM: Knowledge Distillation via Virtual Relation Matching
W. Zhang
Fei Xie
Weidong Cai
Chao Ma
76
0
0
28 Feb 2025
Contrastive Representation Distillation via Multi-Scale Feature Decoupling
Cuipeng Wang
Tieyuan Chen
Haipeng Wang
54
0
0
09 Feb 2025
Rethinking Knowledge in Distillation: An In-context Sample Retrieval Perspective
Jinjing Zhu
Songze Li
Lin Wang
47
0
0
13 Jan 2025
Harmonizing knowledge Transfer in Neural Network with Unified Distillation
Yaomin Huang
Zaomin Yan
Chaomin Shen
Faming Fang
Guixu Zhang
34
0
0
27 Sep 2024
Deep Companion Learning: Enhancing Generalization Through Historical Consistency
Ruizhao Zhu
Venkatesh Saligrama
FedML
34
0
0
26 Jul 2024
Continual Distillation Learning: Knowledge Distillation in Prompt-based Continual Learning
Qifan Zhang
Yunhui Guo
Yu Xiang
VLM
CLL
56
0
0
18 Jul 2024
Relational Representation Distillation
Nikolaos Giakoumoglou
Tania Stathaki
37
0
0
16 Jul 2024
DistilDoc: Knowledge Distillation for Visually-Rich Document Applications
Jordy Van Landeghem
Subhajit Maity
Ayan Banerjee
Matthew Blaschko
Marie-Francine Moens
Josep Lladós
Sanket Biswas
50
2
0
12 Jun 2024
ReDistill: Residual Encoded Distillation for Peak Memory Reduction of CNNs
Fang Chen
Gourav Datta
Mujahid Al Rafi
Hyeran Jeon
Meng Tang
93
1
0
06 Jun 2024
CKD: Contrastive Knowledge Distillation from A Sample-wise Perspective
Wencheng Zhu
Xin Zhou
Pengfei Zhu
Yu Wang
Qinghua Hu
VLM
64
1
0
22 Apr 2024
MTKD: Multi-Teacher Knowledge Distillation for Image Super-Resolution
Yuxuan Jiang
Chen Feng
Fan Zhang
David Bull
SupR
51
11
0
15 Apr 2024
On the Surprising Efficacy of Distillation as an Alternative to Pre-Training Small Models
Sean Farhat
Deming Chen
42
0
0
04 Apr 2024
Learning to Project for Cross-Task Knowledge Distillation
Dylan Auty
Roy Miles
Benedikt Kolbeinsson
K. Mikolajczyk
40
0
0
21 Mar 2024
Distill2Explain: Differentiable decision trees for explainable reinforcement learning in energy application controllers
Gargya Gokhale
Seyed soroush Karimi madahi
Bert Claessens
Chris Develder
OffRL
25
2
0
18 Mar 2024
LIX: Implicitly Infusing Spatial Geometric Prior Knowledge into Visual Semantic Segmentation for Autonomous Driving
Sicen Guo
Zhiyuan Wu
Qijun Chen
Ioannis Pitas
Rui Fan
37
1
0
13 Mar 2024
Attention-guided Feature Distillation for Semantic Segmentation
Amir M. Mansourian
Arya Jalali
Rozhan Ahmadi
S. Kasaei
30
0
0
08 Mar 2024
Learning to Maximize Mutual Information for Chain-of-Thought Distillation
Xin Chen
Hanxian Huang
Yanjun Gao
Yi Wang
Jishen Zhao
Ke Ding
35
11
0
05 Mar 2024
GraphKD: Exploring Knowledge Distillation Towards Document Object Detection with Structured Graph Creation
Ayan Banerjee
Sanket Biswas
Josep Lladós
Umapada Pal
40
1
0
17 Feb 2024
Revisiting Knowledge Distillation under Distribution Shift
Songming Zhang
Ziyu Lyu
Xiaofeng Chen
29
1
0
25 Dec 2023
Robustness-Reinforced Knowledge Distillation with Correlation Distance and Network Pruning
Seonghak Kim
Gyeongdo Ham
Yucheol Cho
Daeshik Kim
27
2
0
23 Nov 2023
DONUT-hole: DONUT Sparsification by Harnessing Knowledge and Optimizing Learning Efficiency
Azhar Shaikh
Michael Cochez
Denis Diachkov
Michiel de Rijcke
Sahar Yousefi
25
0
0
09 Nov 2023
torchdistill Meets Hugging Face Libraries for Reproducible, Coding-Free Deep Learning Studies: A Case Study on NLP
Yoshitomo Matsubara
VLM
26
1
0
26 Oct 2023
Understanding the Effects of Projectors in Knowledge Distillation
Yudong Chen
Sen Wang
Jiajun Liu
Xuwei Xu
Frank de Hoog
Brano Kusy
Zi Huang
26
0
0
26 Oct 2023
Bidirectional Knowledge Reconfiguration for Lightweight Point Cloud Analysis
Peipei Li
Xing Cui
Yibo Hu
Man Zhang
Ting Yao
Tao Mei
25
0
0
08 Oct 2023
ADU-Depth: Attention-based Distillation with Uncertainty Modeling for Depth Estimation
Zizhang Wu
Zhuozheng Li
Zhi-Gang Fan
Yunzhe Wu
Xiaoquan Wang
Rui Tang
Jian Pu
23
1
0
26 Sep 2023
Multi-Label Knowledge Distillation
Penghui Yang
Ming-Kun Xie
Chen-Chen Zong
Lei Feng
Gang Niu
Masashi Sugiyama
Sheng-Jun Huang
36
10
0
12 Aug 2023
Review helps learn better: Temporal Supervised Knowledge Distillation
Dongwei Wang
Zhi Han
Yanmei Wang
Xi’ai Chen
Baichen Liu
Yandong Tang
60
1
0
03 Jul 2023
Improving Knowledge Distillation via Regularizing Feature Norm and Direction
Yuzhu Wang
Lechao Cheng
Manni Duan
Yongheng Wang
Zunlei Feng
Shu Kong
31
19
0
26 May 2023
Knowledge Diffusion for Distillation
Tao Huang
Yuan Zhang
Mingkai Zheng
Shan You
Fei Wang
Chao Qian
Chang Xu
37
50
0
25 May 2023
Decoupled Kullback-Leibler Divergence Loss
Jiequan Cui
Zhuotao Tian
Zhisheng Zhong
Xiaojuan Qi
Bei Yu
Hanwang Zhang
39
38
0
23 May 2023
Is Synthetic Data From Diffusion Models Ready for Knowledge Distillation?
Zheng Li
Yuxuan Li
Penghai Zhao
Renjie Song
Xiang Li
Jian Yang
34
19
0
22 May 2023
Student-friendly Knowledge Distillation
Mengyang Yuan
Bo Lang
Fengnan Quan
20
17
0
18 May 2023
Do Not Blindly Imitate the Teacher: Using Perturbed Loss for Knowledge Distillation
Rongzhi Zhang
Jiaming Shen
Tianqi Liu
Jia-Ling Liu
Michael Bendersky
Marc Najork
Chao Zhang
48
18
0
08 May 2023
Function-Consistent Feature Distillation
Dongyang Liu
Meina Kan
Shiguang Shan
Xilin Chen
44
18
0
24 Apr 2023
Knowledge Distillation Under Ideal Joint Classifier Assumption
Huayu Li
Xiwen Chen
G. Ditzler
Janet Roveda
Ao Li
18
1
0
19 Apr 2023
From Knowledge Distillation to Self-Knowledge Distillation: A Unified Approach with Normalized Loss and Customized Soft Labels
Zhendong Yang
Ailing Zeng
Zhe Li
Tianke Zhang
Chun Yuan
Yu Li
29
72
0
23 Mar 2023
Detecting the open-world objects with the help of the Brain
Shuailei Ma
Yuefeng Wang
Ying-yu Wei
Peihao Chen
Zhixiang Ye
Jiaqi Fan
Enming Zhang
Thomas H. Li
VLM
ObjD
24
2
0
21 Mar 2023
Knowledge Distillation from Single to Multi Labels: an Empirical Study
Youcai Zhang
Yuzhuo Qin
Heng-Ye Liu
Yanhao Zhang
Yaqian Li
X. Gu
VLM
53
2
0
15 Mar 2023
Generic-to-Specific Distillation of Masked Autoencoders
Wei Huang
Zhiliang Peng
Li Dong
Furu Wei
Jianbin Jiao
QiXiang Ye
32
22
0
28 Feb 2023
Take a Prior from Other Tasks for Severe Blur Removal
Pei Wang
Danna Xue
Yu Zhu
Jinqiu Sun
Qingsen Yan
Sung-eui Yoon
Yanning Zhang
26
1
0
14 Feb 2023
Dataset Distillation: A Comprehensive Review
Ruonan Yu
Songhua Liu
Xinchao Wang
DD
53
121
0
17 Jan 2023
Guided Hybrid Quantization for Object detection in Multimodal Remote Sensing Imagery via One-to-one Self-teaching
Jiaqing Zhang
Jie Lei
Weiying Xie
Yunsong Li
Wenxuan Wang
MQ
27
18
0
31 Dec 2022
Exploring Content Relationships for Distilling Efficient GANs
Lizhou You
Mingbao Lin
Tie Hu
Rongrong Ji
41
3
0
21 Dec 2022
Hint-dynamic Knowledge Distillation
Yiyang Liu
Chenxin Li
Xiaotong Tu
Xinghao Ding
Yue Huang
14
1
0
30 Nov 2022
Curriculum Temperature for Knowledge Distillation
Zheng Li
Xiang Li
Lingfeng Yang
Borui Zhao
Renjie Song
Lei Luo
Jun Yu Li
Jian Yang
27
132
0
29 Nov 2022
Rethinking Implicit Neural Representations for Vision Learners
Yiran Song
Qianyu Zhou
Lizhuang Ma
16
7
0
22 Nov 2022