Knowledge Distillation from A Stronger Teacher
Tao Huang, Shan You, Fei Wang, Chao Qian, Chang Xu · 21 May 2022
ArXiv (abs) · PDF · HTML · GitHub (146★)

Papers citing "Knowledge Distillation from A Stronger Teacher"

50 / 131 papers shown

Ground Reaction Force Estimation via Time-aware Knowledge Distillation
Eun Som Jeon, Sinjini Mitra, Jisoo Lee, Omik M. Save, Ankita Shukla, Hyunglae Lee, Pavan Turaga · 12 Jun 2025

TableDreamer: Progressive and Weakness-guided Data Synthesis from Scratch for Table Instruction Tuning
Mingyu Zheng, Zhifan Feng, Jia Wang, Lanrui Wang, Zheng Lin, Yang Hao, Weiping Wang · 10 Jun 2025 · LMTD

ReStNet: A Reusable & Stitchable Network for Dynamic Adaptation on IoT Devices
Maoyu Wang, Yao Lu, Jiaqi Nie, Zeyu Wang, Yun Lin, Qi Xuan, Guan Gui · 08 Jun 2025

Revisiting Cross-Modal Knowledge Distillation: A Disentanglement Approach for RGBD Semantic Segmentation
Roger Ferrod, C. Dantas, Luigi Di Caro, Dino Ienco · 30 May 2025

Proxy-FDA: Proxy-based Feature Distribution Alignment for Fine-tuning Vision Foundation Models without Forgetting
Chen Huang, Skyler Seto, Hadi Pouransari, Mehrdad Farajtabar, Raviteja Vemulapalli, Fartash Faghri, Oncel Tuzel, B. Theobald, Josh Susskind · 30 May 2025 · CLL

A Cross Modal Knowledge Distillation & Data Augmentation Recipe for Improving Transcriptomics Representations through Morphological Features
Ihab Bendidi, Yassir El Mesbahi, Alisandra K. Denton, Karush Suri, Kian Kenyon-Dean, Auguste Genovesio, Emmanuel Noutahi · 27 May 2025

FiGKD: Fine-Grained Knowledge Distillation via High-Frequency Detail Transfer
Seonghak Kim · 17 May 2025

Simple Semi-supervised Knowledge Distillation from Vision-Language Models via Dual-Head Optimization
Seongjae Kang, Dong Bok Lee, Hyungjoon Jang, Sung Ju Hwang · 12 May 2025 · VLM

Onboard Optimization and Learning: A Survey
Monirul Islam Pavel, Siyi Hu, Mahardhika Pratama, Ryszard Kowalczyk · 07 May 2025

ABKD: Pursuing a Proper Allocation of the Probability Mass in Knowledge Distillation via α-β-Divergence
Guanghui Wang, Zhiyong Yang, Ziyi Wang, Shi Wang, Qianqian Xu, Qingming Huang · 07 May 2025

Head-Tail-Aware KL Divergence in Knowledge Distillation for Spiking Neural Networks
Tianqing Zhang, Zixin Zhu, Kairong Yu, Hongwei Wang · 29 Apr 2025

HDC: Hierarchical Distillation for Multi-level Noisy Consistency in Semi-Supervised Fetal Ultrasound Segmentation
Tran Quoc Khanh Le, Nguyen Lan Vi Vu, Ha-Hieu Pham, Xuan-Loc Huynh, T. Nguyen, Minh Huu Nhat Le, Quan Nguyen, Hien Nguyen · 14 Apr 2025

Revisiting the Relationship between Adversarial and Clean Training: Why Clean Training Can Make Adversarial Training Better
MingWei Zhou, Xiaobing Pei · 30 Mar 2025 · AAML

Delving Deep into Semantic Relation Distillation
Zhaoyi Yan, Kangjun Liu, Qixiang Ye · 27 Mar 2025

CustomKD: Customizing Large Vision Foundation for Edge Model Improvement via Knowledge Distillation
Jungsoo Lee, Debasmit Das, Munawar Hayat, Sungha Choi, Kyuwoong Hwang, Fatih Porikli · 23 Mar 2025 · VLM

Sparse Logit Sampling: Accelerating Knowledge Distillation in LLMs
Anshumann, Mohd Abbas Zaidi, Akhil Kedia, Jinwoo Ahn, Taehwak Kwon, Kangwook Lee, Haejun Lee, Joohyung Lee · 21 Mar 2025 · FedML

Uncertainty Quantification and Confidence Calibration in Large Language Models: A Survey
Xiaoou Liu, Tiejin Chen, Longchao Da, Chacha Chen, Zhen Lin, Hua Wei · 20 Mar 2025 · HILM

AUTV: Creating Underwater Video Datasets with Pixel-wise Annotations
Quang-Trung Truong, Wong Yuk Kwan, Duc Thanh Nguyen, Binh-Son Hua, Sai-Kit Yeung · 17 Mar 2025 · VGen

Asymmetric Decision-Making in Online Knowledge Distillation: Unifying Consensus and Divergence
Zhaowei Chen, Borui Zhao, Yuchen Ge, Yuhao Chen, Renjie Song, Jiajun Liang · 09 Mar 2025

Partial Convolution Meets Visual Attention
Haiduo Huang, Fuwei Yang, D. Li, Ji Liu, Lu Tian, Jinzhang Peng, Pengju Ren, E. Barsoum · 05 Mar 2025 · 3DH

Real-Time Aerial Fire Detection on Resource-Constrained Devices Using Knowledge Distillation
Sabina Jangirova, Branislava Jankovic, Waseem Ullah, Latif U. Khan, Mohsen Guizani · 28 Feb 2025

VRM: Knowledge Distillation via Virtual Relation Matching
W. Zhang, Fei Xie, Weidong Cai, Chao Ma · 28 Feb 2025

Multi-Teacher Knowledge Distillation with Reinforcement Learning for Visual Recognition
Chuanguang Yang, Xinqiang Yu, Han Yang, Zhulin An, Chengqing Yu, Libo Huang, Yongjun Xu · 22 Feb 2025

Multi-Level Decoupled Relational Distillation for Heterogeneous Architectures
Yaoxin Yang, Peng Ye, Weihao Lin, Kangcong Li, Yan Wen, Jia Hao, Tao Chen · 10 Feb 2025

Rethinking Knowledge in Distillation: An In-context Sample Retrieval Perspective
Jinjing Zhu, Songze Li, Lin Wang · 13 Jan 2025

Knowledge Distillation with Adapted Weight
Sirong Wu, Xi Luo, Junjie Liu, Yuhui Deng · 06 Jan 2025

Cross-View Consistency Regularisation for Knowledge Distillation
W. Zhang, Dongnan Liu, Weidong Cai, Chao Ma · 21 Dec 2024

Neural Collapse Inspired Knowledge Distillation
Shuoxi Zhang, Zijian Song, Kun He · 16 Dec 2024

Wasserstein Distance Rivals Kullback-Leibler Divergence for Knowledge Distillation
Jiaming Lv, Haoyuan Yang, P. Li · 11 Dec 2024

Lightweight Contenders: Navigating Semi-Supervised Text Mining through Peer Collaboration and Self Transcendence
Qianren Mao, Weifeng Jiang, Qingbin Liu, Chenghua Lin, Qian Li, Xianqing Wen, Jianxin Li, Jinhu Lu · 01 Dec 2024

Exploring Feature-based Knowledge Distillation for Recommender System: A Frequency Perspective
Zhangchi Zhu, Wei Zhang · 16 Nov 2024

Dual-Head Knowledge Distillation: Enhancing Logits Utilization with an Auxiliary Head
Penghui Yang, Chen-Chen Zong, Sheng-Jun Huang, Lei Feng, Bo An · 13 Nov 2024

Over-parameterized Student Model via Tensor Decomposition Boosted Knowledge Distillation
Yu-Liang Zhan, Zhong-Yi Lu, Hao Sun, Ze-Feng Gao · 10 Nov 2024

Semantic Knowledge Distillation for Onboard Satellite Earth Observation Image Classification
Thanh-Dung Le, Vu Nguyen Ha, T. Nguyen, G. Eappen, P. Thiruvasagam, ..., Duc-Dung Tran, Luis Manuel Garcés Socarrás, J. L. González-Rios, Juan Carlos Merlano Duncan, Symeon Chatzinotas · 31 Oct 2024

TAS: Distilling Arbitrary Teacher and Student via a Hybrid Assistant
Guopeng Li, Qiang Wang, K. Yan, Shouhong Ding, Yuan Gao, Gui-Song Xia · 16 Oct 2024

SNN-PAR: Energy Efficient Pedestrian Attribute Recognition via Spiking Neural Networks
Haiyang Wang, Qian Zhu, Mowen She, Yabo Li, Haoyu Song, Minghe Xu, Xiao Wang · 10 Oct 2024 · ViT

Efficient and Robust Knowledge Distillation from A Stronger Teacher Based on Correlation Matching
Wenqi Niu, Yingchao Wang, Guohui Cai, Hanpo Hou · 09 Oct 2024

Gap Preserving Distillation by Building Bidirectional Mappings with A Dynamic Teacher
Yong Guo, Shulian Zhang, Haolin Pan, Jing Liu, Yulun Zhang, Jian Chen · 05 Oct 2024

Dataset Distillation via Knowledge Distillation: Towards Efficient Self-Supervised Pre-Training of Deep Networks
S. Joshi, Jiayi Ni, Baharan Mirzasoleiman · 03 Oct 2024 · DD

Fair4Free: Generating High-fidelity Fair Synthetic Samples using Data Free Distillation
Md Fahim Sikder, Daniel de Leng, Fredrik Heintz · 02 Oct 2024

Classroom-Inspired Multi-Mentor Distillation with Adaptive Learning Strategies
Shalini Sarode, Muhammad Saif Ullah Khan, Tahira Shehzadi, Didier Stricker, Muhammad Zeshan Afzal · 30 Sep 2024

Simple Unsupervised Knowledge Distillation With Space Similarity
Aditya Singh, Haohan Wang · 20 Sep 2024

EFCM: Efficient Fine-tuning on Compressed Models for deployment of large models in medical image analysis
Shaojie Li, Zhaoshuo Diao · 18 Sep 2024

LoCa: Logit Calibration for Knowledge Distillation
Runming Yang, Taiqiang Wu, Yujiu Yang · 07 Sep 2024

Adaptive Explicit Knowledge Transfer for Knowledge Distillation
H. Park, Jong-seok Lee · 03 Sep 2024

PRG: Prompt-Based Distillation Without Annotation via Proxy Relational Graph
Yijin Xu, Jialun Liu, Hualiang Wei, Wenhui Li · 22 Aug 2024

Computer Vision Model Compression Techniques for Embedded Systems: A Survey
Alexandre Lopes, Fernando Pereira dos Santos, D. Oliveira, Mauricio Schiezaro, Hélio Pedrini · 15 Aug 2024

Training Spatial-Frequency Visual Prompts and Probabilistic Clusters for Accurate Black-Box Transfer Learning
Wonwoo Cho, Kangyeol Kim, Saemee Choi, Jaegul Choo · 15 Aug 2024 · VLM

DisCoM-KD: Cross-Modal Knowledge Distillation via Disentanglement Representation and Adversarial Learning
Dino Ienco, C. Dantas · 05 Aug 2024

Distilling Vision-Language Foundation Models: A Data-Free Approach via Prompt Diversification
Yunyi Xuan, Weijie Chen, Shicai Yang, Di Xie, Luojun Lin, Yueting Zhuang · 21 Jul 2024 · VLM