On the Efficacy of Knowledge Distillation
Jang Hyun Cho, Bharath Hariharan
arXiv:1910.01348, 3 October 2019
Papers citing "On the Efficacy of Knowledge Distillation" (50 of 319 shown)
Exploring Causes of Representational Similarity in Machine Learning Models [CML]
Zeyu Michael Li, Hung Anh Vu, Damilola Awofisayo, Emily Wenger
20 May 2025

Distilled Circuits: A Mechanistic Study of Internal Restructuring in Knowledge Distillation
Reilly Haskins, Benjamin Adams
16 May 2025

Simple Semi-supervised Knowledge Distillation from Vision-Language Models via Dual-Head Optimization [VLM]
Seongjae Kang, Dong Bok Lee, Hyungjoon Jang, Sung Ju Hwang
12 May 2025

ABKD: Pursuing a Proper Allocation of the Probability Mass in Knowledge Distillation via α-β-Divergence
Guanghui Wang, Zhiyong Yang, Zihan Wang, Shi Wang, Qianqian Xu, Qingming Huang
07 May 2025

Scaling Laws for Data-Efficient Visual Transfer Learning
Wenxuan Yang, Qingqu Wei, Chenxi Ma, Weimin Tan, Bo Yan
17 Apr 2025

Cycle Training with Semi-Supervised Domain Adaptation: Bridging Accuracy and Efficiency for Real-Time Mobile Scene Detection
Huu-Phong Phan-Nguyen, Anh Dao, T. Nguyen, Tuan Quang, H. Tran, Tinh-Anh Nguyen-Nhu, Huy-Thach Pham, Quan Nguyen, Hoang M. Le, Quang-Vinh Dinh
12 Apr 2025

An Efficient Training Algorithm for Models with Block-wise Sparsity
Ding Zhu, Zhiqun Zuo, Mohammad Mahdi Khalili
27 Mar 2025

CustomKD: Customizing Large Vision Foundation for Edge Model Improvement via Knowledge Distillation [VLM]
Jungsoo Lee, Debasmit Das, Munawar Hayat, Sungha Choi, Kyuwoong Hwang, Fatih Porikli
23 Mar 2025

Efficient Knowledge Distillation via Curriculum Extraction
Shivam Gupta, Sushrut Karmalkar
21 Mar 2025

Moss: Proxy Model-based Full-Weight Aggregation in Federated Learning with Heterogeneous Models
Y. Cai, Ziqi Zhang, Ding Li, Yao Guo, Xiangqun Chen
13 Mar 2025

Knowledge Consultation for Semi-Supervised Semantic Segmentation [VLM]
Thuan Than, Nhat-Anh Nguyen-Dang, Dung Nguyen, Salwa K. Al Khatib, Ahmed Elhagry, Hai T. Phan, Yihui He, Zhiqiang Shen, Marios Savvides, Dang T. Huynh
12 Mar 2025

Distilling Knowledge into Quantum Vision Transformers for Biomedical Image Classification
Thomas Boucher, Evangelos B. Mazomenos
10 Mar 2025

Temporal Separation with Entropy Regularization for Knowledge Distillation in Spiking Neural Networks
Kairong Yu, Chengting Yu, Tianqing Zhang, Xiaochen Zhao, Shu Yang, Hongwei Wang, Qiang Zhang, Qi Xu
05 Mar 2025

FlexiDiT: Your Diffusion Transformer Can Easily Generate High-Quality Samples with Less Compute
Sotiris Anagnostidis, Gregor Bachmann, Yeongmin Kim, Jonas Kohler, Markos Georgopoulos, A. Sanakoyeu, Yuming Du, Albert Pumarola, Ali K. Thabet, Edgar Schönfeld
27 Feb 2025

A Transformer-in-Transformer Network Utilizing Knowledge Distillation for Image Recognition [ViT]
Dewan Tauhid Rahman, Yeahia Sarker, Antar Mazumder, Md. Shamim Anower
24 Feb 2025

Multi-Teacher Knowledge Distillation with Reinforcement Learning for Visual Recognition
Chuanguang Yang, Xinqiang Yu, Han Yang, Zhulin An, Chengqing Yu, Libo Huang, Yongjun Xu
22 Feb 2025

TAID: Temporally Adaptive Interpolated Distillation for Efficient Knowledge Transfer in Language Models [VLM]
Makoto Shing, Kou Misaki, Han Bao, Sho Yokoi, Takuya Akiba
28 Jan 2025

Edge Graph Intelligence: Reciprocally Empowering Edge Networks with Graph Intelligence
Liekang Zeng, Shengyuan Ye, Xu Chen, Xiaoxi Zhang, Ju Ren, Jian Tang, Yang Yang, Xuemin Shen
08 Jan 2025

Knowledge Distillation with Adapted Weight
Sirong Wu, Xi Luo, Junjie Liu, Yuhui Deng
06 Jan 2025

Cross-View Consistency Regularisation for Knowledge Distillation
W. Zhang, Dongnan Liu, Weidong Cai, Chao Ma
21 Dec 2024

GazeGen: Gaze-Driven User Interaction for Visual Content Generation [VGen]
He-Yen Hsieh, Ziyun Li, Sai Qian Zhang, W. Ting, Kao-Den Chang, B. D. Salvo, Chiao Liu, H. T. Kung
07 Nov 2024

Centerness-based Instance-aware Knowledge Distillation with Task-wise Mutual Lifting for Object Detection on Drone Imagery
Bowei Du, Zhixuan Liao, Yanan Zhang, Zhi Cai, Jiaxin Chen, Di Huang
05 Nov 2024

Toward Robust Incomplete Multimodal Sentiment Analysis via Hierarchical Representation Learning
Mingxing Li, Dingkang Yang, Y. Liu, Shunli Wang, Jiawei Chen, ..., Xiaolu Hou, Mingyang Sun, Ziyun Qian, Dongliang Kou, Li Zhang
05 Nov 2024

Scale-Aware Recognition in Satellite Images under Resource Constraints
Shreelekha Revankar, Cheng Perng Phoo, Utkarsh Mall, Bharath Hariharan, Kavita Bala
31 Oct 2024

Multi-Level Feature Distillation of Joint Teachers Trained on Distinct Image Datasets
Adrian Iordache, B. Alexe, Radu Tudor Ionescu
29 Oct 2024

Adversarial Training: A Survey [AAML]
Mengnan Zhao, Lihe Zhang, Jingwen Ye, Huchuan Lu, Baocai Yin, Xinchao Wang
19 Oct 2024

Towards Satellite Non-IID Imagery: A Spectral Clustering-Assisted Federated Learning Approach
Luyao Zou, Yu Min Park, Chu Myaet Thwal, Y. Tun, Zhu Han, Choong Seon Hong
17 Oct 2024

Cyber Attacks Prevention Towards Prosumer-based EV Charging Stations: An Edge-assisted Federated Prototype Knowledge Distillation Approach
Luyao Zou, Quang Hieu Vo, Kitae Kim, Huy Q. Le, Chu Myaet Thwal, Chaoning Zhang, Choong Seon Hong
17 Oct 2024

Distilling Invariant Representations with Dual Augmentation
Nikolaos Giakoumoglou, Tania Stathaki
12 Oct 2024

GAI-Enabled Explainable Personalized Federated Semi-Supervised Learning [FedML]
Yubo Peng, Feibo Jiang, Li Dong, Kezhi Wang, Kun Yang
11 Oct 2024

Efficient and Robust Knowledge Distillation from A Stronger Teacher Based on Correlation Matching
Wenqi Niu, Yingchao Wang, Guohui Cai, Hanpo Hou
09 Oct 2024

Gap Preserving Distillation by Building Bidirectional Mappings with A Dynamic Teacher
Yong Guo, Shulian Zhang, Haolin Pan, Jing Liu, Yulun Zhang, Jian Chen
05 Oct 2024

Tuning Timestep-Distilled Diffusion Model Using Pairwise Sample Optimization
Zichen Miao, Zhengyuan Yang, Kevin Lin, Ze Wang, Zicheng Liu, Lijuan Wang, Qiang Qiu
04 Oct 2024

Harmonizing Knowledge Transfer in Neural Network with Unified Distillation
Yaomin Huang, Zaomin Yan, Chaomin Shen, Faming Fang, Guixu Zhang
27 Sep 2024

Simple Unsupervised Knowledge Distillation With Space Similarity
Aditya Singh, Haohan Wang
20 Sep 2024

Efficient Knowledge Distillation: Empowering Small Language Models with Teacher Model Insights
Mohamad Ballout, U. Krumnack, Gunther Heidemann, Kai-Uwe Kühnberger
19 Sep 2024

Integrated Multi-Level Knowledge Distillation for Enhanced Speaker Verification
Wenhao Yang, Jianguo Wei, Wenhuan Lu, Xugang Lu, Lei Li
14 Sep 2024

Collaborative Learning for Enhanced Unsupervised Domain Adaptation
Minhee Cho, Hyesong Choi, Hayeon Jo, Dongbo Min
04 Sep 2024

TSAK: Two-Stage Semantic-Aware Knowledge Distillation for Efficient Wearable Modality and Model Optimization in Manufacturing Lines
Hymalai Bello, Daniel Geißler, Sungho Suh, Bo Zhou, Paul Lukowicz
26 Aug 2024

PRG: Prompt-Based Distillation Without Annotation via Proxy Relational Graph
Yijin Xu, Jialun Liu, Hualiang Wei, Wenhui Li
22 Aug 2024

Distil-DCCRN: A Small-footprint DCCRN Leveraging Feature-based Knowledge Distillation in Speech Enhancement
Runduo Han, Weiming Xu, Zihan Zhang, Mingshuai Liu, Lei Xie
08 Aug 2024

How to Train the Teacher Model for Effective Knowledge Distillation
Shayan Mohajer Hamidi, Xizhen Deng, Renhao Tan, Linfeng Ye, Ahmed H. Salamah
25 Jul 2024

CoMoTo: Unpaired Cross-Modal Lesion Distillation Improves Breast Lesion Detection in Tomosynthesis
Muhammad Alberb, Marawan Elbatel, Aya Elgebaly, R. Montoya-del-Angel, Xiaomeng Li, Robert Martí
24 Jul 2024

Generalizing Teacher Networks for Effective Knowledge Distillation Across Student Architectures
Kuluhan Binici, Weiming Wu, Tulika Mitra
22 Jul 2024

Encapsulating Knowledge in One Prompt [VLM, KELM]
Qi Li, Runpeng Yu, Xinchao Wang
16 Jul 2024

Relational Representation Distillation
Nikolaos Giakoumoglou, Tania Stathaki
16 Jul 2024

DεpS: Delayed ε-Shrinking for Faster Once-For-All Training [CLL]
Aditya Annavajjala, Alind Khare, Animesh Agrawal, Igor Fedorov, Hugo Latapie, Myungjin Lee, Alexey Tumanov
08 Jul 2024

Leveraging Topological Guidance for Improved Knowledge Distillation
Eun Som Jeon, Rahul Khurana, Aishani Pathak, Pavan Turaga
07 Jul 2024

Topological Persistence Guided Knowledge Distillation for Wearable Sensor Data
Eun Som Jeon, Hongjun Choi, A. Shukla, Yuan Wang, Hyunglae Lee, M. Buman, Pavan Turaga
07 Jul 2024

AMD: Automatic Multi-step Distillation of Large-scale Vision Models [VLM]
Cheng Han, Qifan Wang, S. Dianat, Majid Rabbani, Raghuveer M. Rao, Yi Fang, Qiang Guan, Lifu Huang, Dongfang Liu
05 Jul 2024