Knowledge Distillation: A Survey (v7, latest)

9 June 2020 · Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao · VLM
ArXiv (abs) · PDF · HTML

Papers citing "Knowledge Distillation: A Survey"

50 / 328 papers shown
• Provable Weak-to-Strong Generalization via Benign Overfitting · David X. Wu, A. Sahai · 06 Oct 2024
• Efficient Low-Resolution Face Recognition via Bridge Distillation · Shiming Ge, Shengwei Zhao, Chenyu Li, Yu Zhang, Jia Li · CVBM · 18 Sep 2024
• What is the Role of Small Models in the LLM Era: A Survey · Lihu Chen, Gaël Varoquaux · ALM · 10 Sep 2024
• DKDM: Data-Free Knowledge Distillation for Diffusion Models with Any Architecture · Qianlong Xiang, Miao Zhang, Yuzhang Shang, Jianlong Wu, Yan Yan, Liqiang Nie · DiffM · 05 Sep 2024
• A Review of Pseudo-Labeling for Computer Vision · Patrick Kage, Jay C. Rothenberger, Pavlos Andreadis, Dimitrios I. Diochnos · VLM · 13 Aug 2024
• On the Workflows and Smells of Leaderboard Operations (LBOps): An Exploratory Study of Foundation Model Leaderboards · Zhimin Zhao, A. A. Bangash, F. Côgo, Bram Adams, Ahmed E. Hassan · 04 Jul 2024
• Direct Preference Knowledge Distillation for Large Language Models · Yixing Li, Yuxian Gu, Li Dong, Dequan Wang, Yu Cheng, Furu Wei · 28 Jun 2024
• A Label is Worth a Thousand Images in Dataset Distillation · Tian Qin, Zhiwei Deng, David Alvarez-Melis · DD · 15 Jun 2024
• DistilDoc: Knowledge Distillation for Visually-Rich Document Applications · Jordy Van Landeghem, Subhajit Maity, Ayan Banerjee, Matthew Blaschko, Marie-Francine Moens, Josep Lladós, Sanket Biswas · 12 Jun 2024
• Tiny models from tiny data: Textual and null-text inversion for few-shot distillation · Erik Landolsi, Fredrik Kahl · DiffM · 05 Jun 2024
• Pursuing Feature Separation based on Neural Collapse for Out-of-Distribution Detection · Yingwen Wu, Ruiji Yu, Xinwen Cheng, Zhengbao He, Xiaolin Huang · OODD · 28 May 2024
• ModelShield: Adaptive and Robust Watermark against Model Extraction Attack · Kaiyi Pang, Tao Qi, Chuhan Wu, Minhao Bai, Minghu Jiang, Yongfeng Huang · AAML, WaLM · 03 May 2024
• A Novel Spike Transformer Network for Depth Estimation from Event Cameras via Cross-modality Knowledge Distillation · Xin Zhang, Liangxiu Han, Tam Sobeih, Lianghao Han, Darren Dancey · 26 Apr 2024
• CKD: Contrastive Knowledge Distillation from A Sample-wise Perspective · Wencheng Zhu, Xin Zhou, Pengfei Zhu, Yu Wang, Qinghua Hu · VLM · 22 Apr 2024
• CMAT: A Multi-Agent Collaboration Tuning Framework for Enhancing Small Language Models · Xuechen Liang, Meiling Tao, Yinghui Xia, Yiting Xie, Jun Wang, JingSong Yang · LLMAG · 02 Apr 2024
• A Comprehensive Survey on Process-Oriented Automatic Text Summarization with Exploration of LLM-Based Methods · Hanlei Jin, Yang Zhang, Dan Meng, Jun Wang, Jinghua Tan · 05 Mar 2024
• Large Language Models: A Survey · Shervin Minaee, Tomas Mikolov, Narjes Nikzad, M. Asgari-Chenaghlu, R. Socher, Xavier Amatriain, Jianfeng Gao · ALM, LM&MA, ELM · 09 Feb 2024
• Maximizing Discrimination Capability of Knowledge Distillation with Energy Function · Seonghak Kim, Gyeongdo Ham, Suin Lee, Donggon Jang, Daeshik Kim · 24 Nov 2023
• Bridging Classical and Quantum Machine Learning: Knowledge Transfer From Classical to Quantum Neural Networks Using Knowledge Distillation · Mohammad Junayed Hasan, M.R.C. Mahdy · 23 Nov 2023
• Robustness-Reinforced Knowledge Distillation with Correlation Distance and Network Pruning · Seonghak Kim, Gyeongdo Ham, Yucheol Cho, Daeshik Kim · 23 Nov 2023
• Neural Lattice Reduction: A Self-Supervised Geometric Deep Learning Approach · Giovanni Luca Marchetti, Gabriele Cesa, Kumar Pratik, Arash Behboodi · 14 Nov 2023
• Edge-aware Feature Aggregation Network for Polyp Segmentation · Tao Zhou, Yizhe Zhang, Geng Chen, Yi Zhou, Ye Wu, Deng-Ping Fan · 19 Sep 2023
• Unveiling the frontiers of deep learning: innovations shaping diverse domains · Shams Forruque Ahmed, Md. Sakib Bin Alam, Maliha Kabir, Shaila Afrin, Sabiha Jannat Rafa, Aanushka Mehjabin, Amir H. Gandomi · AI4CE · 06 Sep 2023
• Efficiency is Not Enough: A Critical Perspective of Environmentally Sustainable AI · Dustin Wright, Christian Igel, Gabrielle Samuel, Raghavendra Selvan · 05 Sep 2023
• Similarity of Neural Network Models: A Survey of Functional and Representational Measures · Max Klabunde, Tobias Schumacher, M. Strohmaier, Florian Lemmerich · 10 May 2023
• ERSAM: Neural Architecture Search For Energy-Efficient and Real-Time Social Ambiance Measurement · Chaojian Li, Wenwan Chen, Jiayi Yuan, Yingyan Lin, Ashutosh Sabharwal · 19 Mar 2023
• Deep Learning for Cross-Domain Few-Shot Visual Recognition: A Survey · Huali Xu, Shuaifeng Zhi, Shuzhou Sun, Vishal M. Patel, Li Liu · 15 Mar 2023
• Towards Understanding Knowledge Distillation · Mary Phuong, Christoph H. Lampert · 27 May 2021
• Adaptive Multi-Teacher Multi-level Knowledge Distillation · Yuang Liu, Wei Zhang, Jun Wang · 06 Mar 2021
• Collaborative Teacher-Student Learning via Multiple Knowledge Transfer · Liyuan Sun, Jianping Gou, Baosheng Yu, Lan Du, Dacheng Tao · 21 Jan 2021
• ALP-KD: Attention-Based Layer Projection for Knowledge Distillation · Peyman Passban, Yimeng Wu, Mehdi Rezagholizadeh, Qun Liu · 27 Dec 2020
• Learning Light-Weight Translation Models from Deep Transformer · Bei Li, Ziyang Wang, Hui Liu, Quan Du, Tong Xiao, Chunliang Zhang, Jingbo Zhu · VLM · 27 Dec 2020
• Future-Guided Incremental Transformer for Simultaneous Translation · Shaolei Zhang, Yang Feng, Liangyou Li · CLL · 23 Dec 2020
• Diverse Knowledge Distillation for End-to-End Person Search · Xinyu Zhang, Xinlong Wang, Jiawang Bian, Chunhua Shen, Mingyu You · FedML · 21 Dec 2020
• LRC-BERT: Latent-representation Contrastive Knowledge Distillation for Natural Language Understanding · Hao Fu, Shaojun Zhou, Qihong Yang, Junjie Tang, Guiquan Liu, Kaikui Liu, Xiaolong Li · 14 Dec 2020
• Reinforced Multi-Teacher Selection for Knowledge Distillation · Fei Yuan, Linjun Shou, J. Pei, Wutao Lin, Ming Gong, Yan Fu, Daxin Jiang · 11 Dec 2020
• Progressive Network Grafting for Few-Shot Knowledge Distillation · Chengchao Shen, Xinchao Wang, Youtan Yin, Mingli Song, Sihui Luo, Xiuming Zhang · 09 Dec 2020
• Robust Domain Randomised Reinforcement Learning through Peer-to-Peer Distillation · Chenyang Zhao, Timothy M. Hospedales · OOD · 09 Dec 2020
• Cross-Layer Distillation with Semantic Calibration · Defang Chen, Jian-Ping Mei, Yuan Zhang, Can Wang, Yan Feng, Chun-Yen Chen · FedML · 06 Dec 2020
• Effectiveness of Arbitrary Transfer Sets for Data-free Knowledge Distillation · Gaurav Kumar Nayak, Konda Reddy Mopuri, Anirban Chakraborty · 18 Nov 2020
• Online Ensemble Model Compression using Knowledge Distillation · Devesh Walawalkar, Zhiqiang Shen, Marios Savvides · 15 Nov 2020
• Federated Knowledge Distillation · Hyowoon Seo, Jihong Park, Seungeun Oh, M. Bennis, Seong-Lyun Kim · FedML · 04 Nov 2020
• Cream of the Crop: Distilling Prioritized Paths For One-Shot Neural Architecture Search · Houwen Peng, Hao Du, Hongyuan Yu, Qi Li, Jing Liao, Jianlong Fu · 29 Oct 2020
• Comprehensive Attention Self-Distillation for Weakly-Supervised Object Detection · Zeyi Huang, Yang Zou, V. Bhagavatula, Dong Huang · WSOD · 22 Oct 2020
• Knowledge Distillation in Wide Neural Networks: Risk Bound, Data Efficiency and Imperfect Teacher · Guangda Ji, Zhanxing Zhu · 20 Oct 2020
• Knowledge Transfer in Multi-Task Deep Reinforcement Learning for Continuous Control · Zhiyuan Xu, Kun Wu, Zhengping Che, Jian Tang, Jieping Ye · CLL, OffRL · 15 Oct 2020
• Stochastic Precision Ensemble: Self-Knowledge Distillation for Quantized Deep Neural Networks · Yoonho Boo, Sungho Shin, Jungwook Choi, Wonyong Sung · MQ · 30 Sep 2020
• Discriminability Distillation in Group Representation Learning · Manyuan Zhang, Guanglu Song, Hang Zhou, Yu Liu · FedML · 25 Aug 2020
• Matching Guided Distillation · Kaiyu Yue, Jiangfan Deng, Feng Zhou · 23 Aug 2020
• Knowledge Transfer via Dense Cross-Layer Mutual-Distillation · Anbang Yao, Dawei Sun · 18 Aug 2020