PURSUhInT: In Search of Informative Hint Points Based on Layer Clustering for Knowledge Distillation

26 February 2021 (arXiv:2103.00053)
Reyhan Kevser Keser, Aydin Ayanzadeh, O. A. Aghdam, Çaglar Kilcioglu, B. U. Toreyin, N. K. Üre

Papers citing "PURSUhInT: In Search of Informative Hint Points Based on Layer Clustering for Knowledge Distillation"

24 papers:
  • Decoupled Knowledge Distillation. Borui Zhao, Quan Cui, Renjie Song, Yiyu Qiu, Jiajun Liang. 16 Mar 2022.
  • Co-advise: Cross Inductive Bias Distillation. Sucheng Ren, Zhengqi Gao, Tianyu Hua, Zihui Xue, Yonglong Tian, Shengfeng He, Hang Zhao. 23 Jun 2021.
  • Distilling Knowledge via Knowledge Review. Pengguang Chen, Shu Liu, Hengshuang Zhao, Jiaya Jia. 19 Apr 2021.
  • Distilling a Powerful Student Model via Online Knowledge Distillation. Shaojie Li, Mingbao Lin, Yan Wang, Yongjian Wu, Yonghong Tian, Ling Shao, Rongrong Ji. 26 Mar 2021.
  • Student Network Learning via Evolutionary Knowledge Distillation. Kangkai Zhang, Chunhui Zhang, Shikun Li, Dan Zeng, Shiming Ge. 23 Mar 2021.
  • Adaptive Multi-Teacher Multi-level Knowledge Distillation. Yuang Liu, Wei Zhang, Jun Wang. 06 Mar 2021.
  • Show, Attend and Distill: Knowledge Distillation via Attention-based Feature Matching. Mingi Ji, Byeongho Heo, Sungrae Park. 05 Feb 2021.
  • Rethinking Soft Labels for Knowledge Distillation: A Bias-Variance Tradeoff Perspective. Helong Zhou, Liangchen Song, Jiajie Chen, Ye Zhou, Guoli Wang, Junsong Yuan, Qian Zhang. 01 Feb 2021.
  • The State of Knowledge Distillation for Classification. Fabian Ruffy, K. Chahal. 20 Dec 2019.
  • Contrastive Representation Distillation. Yonglong Tian, Dilip Krishnan, Phillip Isola. 23 Oct 2019.
  • Similarity-Preserving Knowledge Distillation. Frederick Tung, Greg Mori. 23 Jul 2019.
  • Deep Learning in Video Multi-Object Tracking: A Survey. Gioele Ciaparrone, Francisco Luque Sánchez, Siham Tabik, L. Troiano, R. Tagliaferri, Francisco Herrera. 18 Jul 2019.
  • Searching for MobileNetV3. Andrew G. Howard, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, ..., Yukun Zhu, Ruoming Pang, Vijay Vasudevan, Quoc V. Le, Hartwig Adam. 06 May 2019.
  • A Comprehensive Overhaul of Feature Distillation. Byeongho Heo, Jeesoo Kim, Sangdoo Yun, Hyojin Park, Nojun Kwak, J. Choi. 03 Apr 2019.
  • Correlation Congruence for Knowledge Distillation. Baoyun Peng, Xiao Jin, Jiaheng Liu, Shunfeng Zhou, Yichao Wu, Yu Liu, Dongsheng Li, Zhaoning Zhang. 03 Apr 2019.
  • Knowledge Distillation by On-the-Fly Native Ensemble. Xu Lan, Xiatian Zhu, S. Gong. 12 Jun 2018.
  • Efficient Processing of Deep Neural Networks: A Tutorial and Survey. Vivienne Sze, Yu-hsin Chen, Tien-Ju Yang, J. Emer. 27 Mar 2017.
  • Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer. Sergey Zagoruyko, N. Komodakis. 12 Dec 2016.
  • Wide Residual Networks. Sergey Zagoruyko, N. Komodakis. 23 May 2016.
  • SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size. F. Iandola, Song Han, Matthew W. Moskewicz, Khalid Ashraf, W. Dally, Kurt Keutzer. 24 Feb 2016.
  • Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning. Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, Alexander A. Alemi. 23 Feb 2016.
  • Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding. Song Han, Huizi Mao, W. Dally. 01 Oct 2015.
  • FitNets: Hints for Thin Deep Nets. Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, C. Gatta, Yoshua Bengio. 19 Dec 2014.
  • How transferable are features in deep neural networks? J. Yosinski, Jeff Clune, Yoshua Bengio, Hod Lipson. 06 Nov 2014.