Knowledge Distillation with the Reused Teacher Classifier

arXiv:2203.14001 · 26 March 2022
Defang Chen, Jian-Ping Mei, Hailin Zhang, Can Wang, Yan Feng, Chun Chen
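For context, the technique this paper is known for (often cited as SimKD) is commonly summarized as: discard the student's own classifier, reuse the teacher's pre-trained classifier at inference, and train only the student backbone plus a small projector to match the teacher's penultimate features. Below is a minimal PyTorch sketch under those assumptions; the module names and dimensions are illustrative, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StudentWithReusedClassifier(nn.Module):
    """Student backbone + projector, topped with the teacher's frozen classifier."""
    def __init__(self, backbone: nn.Module, student_dim: int,
                 teacher_classifier: nn.Linear, teacher_dim: int):
        super().__init__()
        self.backbone = backbone                              # student feature extractor
        self.projector = nn.Linear(student_dim, teacher_dim)  # align feature dimensions
        self.classifier = teacher_classifier                  # reused as-is, never trained
        self.classifier.requires_grad_(False)

    def forward(self, x):
        feat = self.projector(self.backbone(x))  # student features in the teacher's space
        return feat, self.classifier(feat)       # logits come from the reused teacher head

# Toy usage (hypothetical shapes): 32-dim inputs, 8-dim student and
# 16-dim teacher features, 10 classes.
teacher_backbone, teacher_head = nn.Linear(32, 16), nn.Linear(16, 10)
student = StudentWithReusedClassifier(nn.Linear(32, 8), 8, teacher_head, 16)

x = torch.randn(4, 32)
with torch.no_grad():
    t_feat = teacher_backbone(x)       # teacher's penultimate features (fixed target)
s_feat, s_logits = student(x)
F.mse_loss(s_feat, t_feat).backward()  # feature matching trains backbone + projector only
```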

Papers citing "Knowledge Distillation with the Reused Teacher Classifier"

50 / 97 papers shown
Title
ABKD: Pursuing a Proper Allocation of the Probability Mass in Knowledge Distillation via $α$-$β$-Divergence
Guanghui Wang, Zhiyong Yang, Zhilin Wang, Shi Wang, Qianqian Xu, Q. Huang (07 May 2025)
A Dual-Space Framework for General Knowledge Distillation of Large Language Models
Xuzhi Zhang, Songming Zhang, Yunlong Liang, Fandong Meng, Yufeng Chen, Jinan Xu, Jie Zhou (15 Apr 2025)
HDC: Hierarchical Distillation for Multi-level Noisy Consistency in Semi-Supervised Fetal Ultrasound Segmentation
Tran Quoc Khanh Le, Nguyen Lan Vi Vu, Ha-Hieu Pham, Xuan-Loc Huynh, T. Nguyen, Minh Huu Nhat Le, Quan Nguyen, Hien Nguyen (14 Apr 2025)
NCAP: Scene Text Image Super-Resolution with Non-CAtegorical Prior
Dongwoo Park, Suk Pil Ko (01 Apr 2025)
SCJD: Sparse Correlation and Joint Distillation for Efficient 3D Human Pose Estimation [3DH]
Weihong Chen, Xuemiao Xu, Haoxin Yang, Yi Xie, Peng Xiao, Cheng Xu, Huaidong Zhang, Pheng-Ann Heng (18 Mar 2025)
Adaptive Temperature Based on Logits Correlation in Knowledge Distillation
Kazuhiro Matsuyama, Usman Anjum, Satoko Matsuyama, Tetsuo Shoda, J. Zhan (12 Mar 2025)
VRM: Knowledge Distillation via Virtual Relation Matching
W. Zhang, Fei Xie, Weidong Cai, Chao Ma (28 Feb 2025)
Multi-Level Decoupled Relational Distillation for Heterogeneous Architectures
Yaoxin Yang, Peng Ye, Weihao Lin, Kangcong Li, Yan Wen, Jia Hao, Tao Chen (10 Feb 2025)
Contrastive Representation Distillation via Multi-Scale Feature Decoupling
Cuipeng Wang, Tieyuan Chen, Haipeng Wang (09 Feb 2025)
Rethinking Knowledge in Distillation: An In-context Sample Retrieval Perspective
Jinjing Zhu, Songze Li, Lin Wang (13 Jan 2025)
Cross-View Consistency Regularisation for Knowledge Distillation
W. Zhang, Dongnan Liu, Weidong Cai, Chao Ma (21 Dec 2024)
Neural Collapse Inspired Knowledge Distillation
Shuoxi Zhang, Zijian Song, Kun He (16 Dec 2024)
Dynamic Textual Prompt For Rehearsal-free Lifelong Person Re-identification [VLM]
Hongyu Chen, Bingliang Jiao, Wenxuan Wang, Peng Wang (09 Nov 2024)
TAS: Distilling Arbitrary Teacher and Student via a Hybrid Assistant
Guopeng Li, Qiang Wang, K. Yan, Shouhong Ding, Yuan Gao, Gui-Song Xia (16 Oct 2024)
Distilling Invariant Representations with Dual Augmentation
Nikolaos Giakoumoglou, Tania Stathaki (12 Oct 2024)
Gap Preserving Distillation by Building Bidirectional Mappings with A Dynamic Teacher
Yong Guo, Shulian Zhang, Haolin Pan, Jing Liu, Yulun Zhang, Jian Chen (05 Oct 2024)
Classroom-Inspired Multi-Mentor Distillation with Adaptive Learning Strategies
Shalini Sarode, Muhammad Saif Ullah Khan, Tahira Shehzadi, Didier Stricker, Muhammad Zeshan Afzal (30 Sep 2024)
Conditional Image Synthesis with Diffusion Models: A Survey [VLM]
Zheyuan Zhan, Defang Chen, Jian-Ping Mei, Zhenghe Zhao, Jiawei Chen, Chun Chen, Siwei Lyu, Can Wang (28 Sep 2024)
Kendall's $τ$ Coefficient for Logits Distillation
Yuchen Guan, Runxi Cheng, Kang Liu, Chun Yuan (26 Sep 2024)
Learn from Balance: Rectifying Knowledge Transfer for Long-Tailed Scenarios
Xinlei Huang, Jialiang Tang, Xubin Zheng, Jinjia Zhou, Wenxin Yu, Ning Jiang (12 Sep 2024)
LoCa: Logit Calibration for Knowledge Distillation
Runming Yang, Taiqiang Wu, Yujiu Yang (07 Sep 2024)
Knowledge Distillation with Refined Logits
Wujie Sun, Defang Chen, Siwei Lyu, Genlang Chen, Chun-Yen Chen, Can Wang (14 Aug 2024)
Deep Companion Learning: Enhancing Generalization Through Historical Consistency [FedML]
Ruizhao Zhu, Venkatesh Saligrama (26 Jul 2024)
CoMoTo: Unpaired Cross-Modal Lesion Distillation Improves Breast Lesion Detection in Tomosynthesis
Muhammad Alberb, Marawan Elbatel, Aya Elgebaly, R. Montoya-del-Angel, Xiaomeng Li, Robert Martí (24 Jul 2024)
Relational Representation Distillation
Nikolaos Giakoumoglou, Tania Stathaki (16 Jul 2024)
MutDet: Mutually Optimizing Pre-training for Remote Sensing Object Detection [ViT]
Ziyue Huang, Yongchao Feng, Qingjie Liu, Yunhong Wang (13 Jul 2024)
Topological Persistence Guided Knowledge Distillation for Wearable Sensor Data
Eun Som Jeon, Hongjun Choi, A. Shukla, Yuan Wang, Hyunglae Lee, M. Buman, P. Turaga (07 Jul 2024)
Dual-Space Knowledge Distillation for Large Language Models
Songming Zhang, Xue Zhang, Zengkui Sun, Yufeng Chen, Jinan Xu (25 Jun 2024)
Adaptive Teaching with Shared Classifier for Knowledge Distillation
Jaeyeon Jang, Young-Ik Kim, Jisu Lim, Hyeonseong Lee (12 Jun 2024)
DistilDoc: Knowledge Distillation for Visually-Rich Document Applications
Jordy Van Landeghem, Subhajit Maity, Ayan Banerjee, Matthew Blaschko, Marie-Francine Moens, Josep Lladós, Sanket Biswas (12 Jun 2024)
Tiny models from tiny data: Textual and null-text inversion for few-shot distillation [DiffM]
Erik Landolsi, Fredrik Kahl (05 Jun 2024)
Estimating Human Poses Across Datasets: A Unified Skeleton and Multi-Teacher Distillation Approach [3DH]
Muhammad Gul Zain Ali Khan, Dhavalkumar Limbachiya, Didier Stricker, Muhammad Zeshan Afzal (30 May 2024)
Exploring Graph-based Knowledge: Multi-Level Feature Distillation via Channels Relational Graph
Zhiwei Wang, Jun Huang, Longhua Ma, Chengyu Wu, Hongyu Ma (14 May 2024)
From Algorithm to Hardware: A Survey on Efficient and Safe Deployment of Deep Neural Networks
Xue Geng, Zhe Wang, Chunyun Chen, Qing Xu, Kaixin Xu, ..., Zhenghua Chen, M. Aly, Jie Lin, Min-man Wu, Xiaoli Li (09 May 2024)
CKD: Contrastive Knowledge Distillation from A Sample-wise Perspective [VLM]
Wencheng Zhu, Xin Zhou, Pengfei Zhu, Yu Wang, Qinghua Hu (22 Apr 2024)
Dynamic Temperature Knowledge Distillation
Yukang Wei, Yu Bai (19 Apr 2024)
Lightweight Deep Learning for Resource-Constrained Environments: A Survey
Hou-I Liu, Marco Galindo, Hongxia Xie, Lai-Kuan Wong, Hong-Han Shuai, Yung-Hui Li, Wen-Huang Cheng (08 Apr 2024)
On the Surprising Efficacy of Distillation as an Alternative to Pre-Training Small Models
Sean Farhat, Deming Chen (04 Apr 2024)
Improve Knowledge Distillation via Label Revision and Data Selection
Weichao Lan, Yiu-ming Cheung, Qing Xu, Buhua Liu, Zhikai Hu, Mengke Li, Zhenghua Chen (03 Apr 2024)
Scale Decoupled Distillation
Shicai Wei (20 Mar 2024)
CrossGLG: LLM Guides One-shot Skeleton-based 3D Action Recognition in a Cross-level Manner
Tingbing Yan, Wenzheng Zeng, Yang Xiao, Xingyu Tong, Bo Tan, Zhiwen Fang, Zhiguo Cao, Qiufeng Wang (15 Mar 2024)
Attention-guided Feature Distillation for Semantic Segmentation
Amir M. Mansourian, Arya Jalali, Rozhan Ahmadi, S. Kasaei (08 Mar 2024)
PromptKD: Unsupervised Prompt Distillation for Vision-Language Models [VLM]
Zheng Li, Xiang Li, Xinyi Fu, Xing Zhang, Weiqiang Wang, Shuo Chen, Jian Yang (05 Mar 2024)
Logit Standardization in Knowledge Distillation
Shangquan Sun, Wenqi Ren, Jingzhi Li, Rui Wang, Xiaochun Cao (03 Mar 2024)
GraphKD: Exploring Knowledge Distillation Towards Document Object Detection with Structured Graph Creation
Ayan Banerjee, Sanket Biswas, Josep Lladós, Umapada Pal (17 Feb 2024)
Direct Distillation between Different Domains
Jialiang Tang, Shuo Chen, Gang Niu, Hongyuan Zhu, Qiufeng Wang, Chen Gong, Masashi Sugiyama (12 Jan 2024)
Knowledge Translation: A New Pathway for Model Compression
Wujie Sun, Defang Chen, Jiawei Chen, Yan Feng, Chun-Yen Chen, Can Wang (11 Jan 2024)
Cloud-Device Collaborative Learning for Multimodal Large Language Models
Guanqun Wang, Jiaming Liu, Chenxuan Li, Junpeng Ma, Yuan Zhang, ..., Kevin Zhang, Maurice Chong, Ray Zhang, Yijiang Liu, Shanghang Zhang (26 Dec 2023)
Rethinking the Domain Gap in Near-infrared Face Recognition
Michail Tarasiou, Jiankang Deng, S. Zafeiriou (01 Dec 2023)
Cosine Similarity Knowledge Distillation for Individual Class Information Transfer
Gyeongdo Ham, Seonghak Kim, Suin Lee, Jae-Hyeok Lee, Daeshik Kim (24 Nov 2023)