Knowledge Distillation with the Reused Teacher Classifier
Defang Chen, Jianhan Mei, Hailin Zhang, C. Wang, Yan Feng, Chun-Yen Chen
26 March 2022 (arXiv: 2203.14001)
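
The method this paper proposes (often abbreviated SimKD) replaces the student's own classification head with the teacher's pre-trained classifier: a small projector maps student features into the teacher's feature space, the reused classifier stays frozen, and training minimizes a simple L2 feature-alignment loss. The PyTorch sketch below is a minimal illustration of that idea under stated assumptions; the backbone interfaces, projector shape, and training step are hypothetical stand-ins, not the authors' released code.

import torch
import torch.nn as nn

class SimKDStudent(nn.Module):
    """Sketch: student backbone + projector, with the teacher's frozen classifier reused on top."""
    def __init__(self, student_backbone, teacher_classifier, s_dim, t_dim):
        super().__init__()
        self.backbone = student_backbone  # assumed to map a batch of images to (B, s_dim) features
        # Projector aligns student features with the teacher's feature dimension.
        self.projector = nn.Sequential(
            nn.Linear(s_dim, t_dim),
            nn.BatchNorm1d(t_dim),
            nn.ReLU(inplace=True),
            nn.Linear(t_dim, t_dim),
        )
        self.classifier = teacher_classifier  # reused teacher head, kept frozen
        for p in self.classifier.parameters():
            p.requires_grad = False

    def forward(self, x):
        f_s = self.projector(self.backbone(x))  # student features projected into teacher space
        return f_s, self.classifier(f_s)        # logits produced by the reused teacher classifier

def distill_step(student, teacher_backbone, x, optimizer):
    # One training step: pure feature matching, no labels and no KL term on logits.
    with torch.no_grad():
        f_t = teacher_backbone(x)  # frozen teacher features, shape (B, t_dim)
    f_s, _ = student(x)
    loss = nn.functional.mse_loss(f_s, f_t)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage sketch: optimize only the trainable (non-frozen) student parameters, e.g.
# optimizer = torch.optim.SGD((p for p in student.parameters() if p.requires_grad),
#                             lr=0.05, momentum=0.9, weight_decay=5e-4)

At inference the student keeps the projector and the reused teacher head, so the accuracy gain costs only the projector's extra parameters on top of the plain student.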

Papers citing "Knowledge Distillation with the Reused Teacher Classifier"

47 of 97 citing papers shown:
Robustness-Reinforced Knowledge Distillation with Correlation Distance and Network Pruning
Seonghak Kim, Gyeongdo Ham, Yucheol Cho, Daeshik Kim (23 Nov 2023)
Using Early Readouts to Mediate Featural Bias in Distillation
Rishabh Tiwari, D. Sivasubramanian, Anmol Reddy Mekala, Ganesh Ramakrishnan, Pradeep Shenoy (28 Oct 2023)
Towards Anytime Fine-tuning: Continually Pre-trained Language Models with Hypernetwork Prompt
Gangwei Jiang, Caigao Jiang, Siqiao Xue, James Y. Zhang, Junqing Zhou, Defu Lian, Ying Wei (19 Oct 2023) [VLM]
Leveraging Vision-Language Models for Improving Domain Generalization in Image Classification
Sravanti Addepalli, Ashish Ramayee Asokan, Lakshay Sharma, R. V. Babu (12 Oct 2023) [VLM]
Boosting Facial Action Unit Detection Through Jointly Learning Facial Landmark Detection and Domain Separation and Reconstruction
Ziqiao Shang, Li Yu (08 Oct 2023) [CVBM]
Bidirectional Knowledge Reconfiguration for Lightweight Point Cloud Analysis
Peipei Li, Xing Cui, Yibo Hu, Man Zhang, Ting Yao, Tao Mei (08 Oct 2023)
Heterogeneous Generative Knowledge Distillation with Masked Image Modeling
Ziming Wang, Shumin Han, Xiaodi Wang, Jing Hao, Xianbin Cao, Baochang Zhang (18 Sep 2023) [VLM]
Knowledge Distillation Layer that Lets the Student Decide
Ada Gorgun, Y. Z. Gürbüz, Aydin Alatan (06 Sep 2023)
MoMA: Momentum Contrastive Learning with Multi-head Attention-based Knowledge Distillation for Histopathology Image Analysis
T. Vuong, J. T. Kwak (31 Aug 2023)
AICSD: Adaptive Inter-Class Similarity Distillation for Semantic Segmentation
Amir M. Mansourian, Rozhan Ahmadi, S. Kasaei (08 Aug 2023)
NormKD: Normalized Logits for Knowledge Distillation
Zhihao Chi, Tu Zheng, Hengjia Li, Zheng Yang, Boxi Wu, Binbin Lin, D. Cai (01 Aug 2023)
Fundus-Enhanced Disease-Aware Distillation Model for Retinal Disease Classification from OCT Images
Lehan Wang, Weihang Dai, Mei Jin, Chubin Ou, Xiaomeng Li (01 Aug 2023)
Distribution Shift Matters for Knowledge Distillation with Webly Collected Images
Jialiang Tang, Shuo Chen, Gang Niu, Masashi Sugiyama, Chenggui Gong (21 Jul 2023)
Customizing Synthetic Data for Data-Free Student Learning
Shiya Luo, Defang Chen, Can Wang (10 Jul 2023)
Review of Large Vision Models and Visual Prompt Engineering
Jiaqi Wang, Zheng Liu, Lin Zhao, Zihao Wu, Chong Ma, ..., Bao Ge, Yixuan Yuan, Dinggang Shen, Tianming Liu, Shu Zhang (03 Jul 2023) [VLM, LRM]
Categories of Response-Based, Feature-Based, and Relation-Based Knowledge Distillation
Chuanguang Yang, Xinqiang Yu, Zhulin An, Yongjun Xu (19 Jun 2023) [VLM, OffRL]
Adaptive Multi-Teacher Knowledge Distillation with Meta-Learning
Hailin Zhang, Defang Chen, Can Wang (11 Jun 2023)
Encoding Time-Series Explanations through Self-Supervised Model Behavior Consistency
Owen Queen, Thomas Hartvigsen, Teddy Koker, Huan He, Theodoros Tsiligkaridis, Marinka Zitnik (03 Jun 2023) [AI4TS]
LowDINO -- A Low Parameter Self Supervised Learning Model
Sai Krishna Prathapaneni, Shvejan Shashank, K. SrikarReddy (28 May 2023)
FoPro-KD: Fourier Prompted Effective Knowledge Distillation for Long-Tailed Medical Image Recognition
Marawan Elbatel, Robert Martí, Xiaomeng Li (27 May 2023) [AAML]
VanillaKD: Revisit the Power of Vanilla Knowledge Distillation from Small Scale to Large Scale
Zhiwei Hao, Jianyuan Guo, Kai Han, Han Hu, Chang Xu, Yunhe Wang (25 May 2023)
NORM: Knowledge Distillation via N-to-One Representation Matching
Xiaolong Liu, Lujun Li, Chao Li, Anbang Yao (23 May 2023)
Adjusting Logit in Gaussian Form for Long-Tailed Visual Recognition
Mengke Li, Y. Cheung, Yang Lu, Zhikai Hu, Weichao Lan, Hui Huang (18 May 2023)
Robust Saliency-Aware Distillation for Few-shot Fine-grained Visual Recognition
Haiqi Liu, Cheng Chen, Xinrong Gong, Tong Zhang (12 May 2023)
Function-Consistent Feature Distillation
Dongyang Liu, Meina Kan, Shiguang Shan, Xilin Chen (24 Apr 2023)
Knowledge Distillation Under Ideal Joint Classifier Assumption
Huayu Li, Xiwen Chen, G. Ditzler, Janet Roveda, Ao Li (19 Apr 2023)
Towards Efficient Task-Driven Model Reprogramming with Foundation Models
Shoukai Xu, Jiangchao Yao, Ran Luo, Shuhai Zhang, Zihao Lian, Mingkui Tan, Bo Han, Yaowei Wang (05 Apr 2023)
Decomposed Cross-modal Distillation for RGB-based Temporal Action Detection
Pilhyeon Lee, Taeoh Kim, Minho Shim, Dongyoon Wee, H. Byun (30 Mar 2023)
DisWOT: Student Architecture Search for Distillation WithOut Training
Peijie Dong, Lujun Li, Zimian Wei (28 Mar 2023)
MV-MR: multi-views and multi-representations for self-supervised learning and knowledge distillation
Vitaliy Kinakh, M. Drozdova, Slava Voloshynovskiy (21 Mar 2023)
Understanding the Role of the Projector in Knowledge Distillation
Roy Miles, K. Mikolajczyk (20 Mar 2023)
Feature-Rich Audio Model Inversion for Data-Free Knowledge Distillation Towards General Sound Classification
Zuheng Kang, Yayun He, Jianzong Wang, Junqing Peng, Xiaoyang Qu, Jing Xiao (14 Mar 2023)
Digital Twin-Assisted Knowledge Distillation Framework for Heterogeneous Federated Learning
Xiucheng Wang, Nan Cheng, Longfei Ma, Ruijin Sun, Rong Chai, Ning Lu (10 Mar 2023) [FedML]
Learn More for Food Recognition via Progressive Self-Distillation
Yaohui Zhu, Linhu Liu, Jiang Tian (09 Mar 2023)
Debiased Distillation by Transplanting the Last Layer
Jiwoon Lee, Jaeho Lee (22 Feb 2023)
Supervision Complexity and its Role in Knowledge Distillation
Hrayr Harutyunyan, A. S. Rawat, A. Menon, Seungyeon Kim, Surinder Kumar (28 Jan 2023)
EmbedDistill: A Geometric Knowledge Distillation for Information Retrieval
Seungyeon Kim, A. S. Rawat, Manzil Zaheer, Sadeep Jayasumana, Veeranjaneyulu Sadhanala, Wittawat Jitkrittum, A. Menon, Rob Fergus, Surinder Kumar (27 Jan 2023) [FedML]
Leveraging Different Learning Styles for Improved Knowledge Distillation in Biomedical Imaging
Usma Niyaz, A. Sambyal, Deepti R. Bathula (06 Dec 2022)
Class-aware Information for Logit-based Knowledge Distillation
Shuoxi Zhang, Hanpeng Liu, J. Hopcroft, Kun He (27 Nov 2022)
Accelerating Diffusion Sampling with Classifier-based Feature Distillation
Wujie Sun, Defang Chen, Can Wang, Deshi Ye, Yan Feng, Chun-Yen Chen (22 Nov 2022)
Online Cross-Layer Knowledge Distillation on Graph Neural Networks with Deep Supervision
Jiongyu Guo, Defang Chen, Can Wang (25 Oct 2022)
Confidence-aware Self-Semantic Distillation on Knowledge Graph Embedding
Yichen Liu, C. Wang, Defang Chen, Zhehui Zhou, Yan Feng, Chun-Yen Chen (07 Jun 2022)
Alignahead: Online Cross-Layer Knowledge Extraction on Graph Neural Networks
Jiongyu Guo, Defang Chen, Can Wang (05 May 2022) [FedML]
Knowledge Distillation with Deep Supervision
Shiya Luo, Defang Chen, Can Wang (16 Feb 2022)
Distilling Knowledge via Knowledge Review
Pengguang Chen, Shu-Lin Liu, Hengshuang Zhao, Jiaya Jia (19 Apr 2021)
Distilling and Transferring Knowledge via cGAN-generated Samples for Image Classification and Regression
Xin Ding, Z. J. Wang, Zuheng Xu, Z. Jane Wang, William J. Welch (07 Apr 2021)
ImageNet Large Scale Visual Recognition Challenge
Olga Russakovsky, Jia Deng, Hao Su, J. Krause, S. Satheesh, ..., A. Karpathy, A. Khosla, Michael S. Bernstein, Alexander C. Berg, Li Fei-Fei (01 Sep 2014) [VLM, ObjD]