Lightweight Self-Knowledge Distillation with Multi-source Information Fusion
Xucong Wang, Pengchao Han, Lei Guo
arXiv:2305.09183 · 16 May 2023
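Many of the distillation papers indexed below build on the soft-target objective of Hinton et al. (2015), "Distilling the Knowledge in a Neural Network" (itself listed below). As context, here is a minimal PyTorch-style sketch of that objective; all names are illustrative and not taken from the paper above:

import torch
import torch.nn.functional as F

def kd_loss(student_logits: torch.Tensor,
            teacher_logits: torch.Tensor,
            labels: torch.Tensor,
            T: float = 4.0,
            alpha: float = 0.9) -> torch.Tensor:
    # Soft-target term: KL divergence between temperature-softened
    # teacher and student distributions, scaled by T^2 so gradient
    # magnitudes stay comparable across temperatures (Hinton et al., 2015).
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-target term: ordinary cross-entropy on the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

In self-knowledge distillation, the setting of the paper above, the teacher signal comes from the student itself (for example, its own earlier predictions or auxiliary branches) rather than from a separate pretrained network.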

Papers citing "Lightweight Self-Knowledge Distillation with Multi-source Information Fusion"

30 / 30 papers shown
Curriculum Temperature for Knowledge Distillation
Zheng Li, Xiang Li, Lingfeng Yang, Borui Zhao, Renjie Song, Lei Luo, Jun Yu Li, Jian Yang
29 Nov 2022 · 58 / 138 / 0

AI-KD: Adversarial learning and Implicit regularization for self-Knowledge Distillation
Hyungmin Kim, Sungho Suh, Sunghyun Baek, Daehwan Kim, Daun Jeong, Hansang Cho, Junmo Kim
20 Nov 2022 · 41 / 5 / 0

Distilling the Undistillable: Learning from a Nasty Teacher
Surgan Jandial, Yash Khasbage, Arghya Pal, V. Balasubramanian, Balaji Krishnamurthy
21 Oct 2022 · 106 / 6 / 0

Asymmetric Temperature Scaling Makes Larger Networks Teach Well Again
Xin-Chun Li, Wenxuan Fan, Shaoming Song, Yinchuan Li, Bingshuai Li, Yunfeng Shao, De-Chuan Zhan
10 Oct 2022 · 73 / 30 / 0

Efficient One Pass Self-distillation with Zipf's Label Smoothing
Jiajun Liang, Linze Li, Z. Bing, Borui Zhao, Yao Tang, Bo Lin, Haoqiang Fan
26 Jul 2022 · 47 / 19 / 0

Self-Distillation from the Last Mini-Batch for Consistency Regularization
Yiqing Shen, Liwu Xu, Yuzhe Yang, Yaqian Li, Yandong Guo
30 Mar 2022 · 81 / 63 / 0

Decoupled Knowledge Distillation
Borui Zhao, Quan Cui, Renjie Song, Yiyu Qiu, Jiajun Liang
16 Mar 2022 · 60 / 541 / 0

Robust and Resource-Efficient Data-Free Knowledge Distillation by Generative Pseudo Replay
Kuluhan Binici, Shivam Aggarwal, N. Pham, K. Leman, T. Mitra
09 Jan 2022 · TTA · 51 / 47 / 0

Undistillable: Making A Nasty Teacher That CANNOT teach students
Haoyu Ma, Tianlong Chen, Ting-Kuei Hu, Chenyu You, Xiaohui Xie, Zhangyang Wang
16 May 2021 · 64 / 42 / 0

Distilling a Powerful Student Model via Online Knowledge Distillation
Shaojie Li, Mingbao Lin, Yan Wang, Yongjian Wu, Yonghong Tian, Ling Shao, Rongrong Ji
26 Mar 2021 · FedML · 72 / 47 / 0

Refine Myself by Teaching Myself: Feature Refinement via Self-Knowledge Distillation
Mingi Ji, Seungjae Shin, Seunghyun Hwang, Gibeom Park, Il-Chul Moon
15 Mar 2021 · 32 / 123 / 0

Delving Deep into Label Smoothing
Chang-Bin Zhang, Peng-Tao Jiang, Qibin Hou, Yunchao Wei, Qi Han, Zhen Li, Ming-Ming Cheng
25 Nov 2020 · 96 / 212 / 0

Self-Knowledge Distillation with Progressive Refinement of Targets
Kyungyul Kim, Byeongmoon Ji, Doyoung Yoon, Sangheum Hwang
22 Jun 2020 · ODL · 69 / 182 / 0

Regularizing Class-wise Predictions via Self-knowledge Distillation
Sukmin Yun, Jongjin Park, Kimin Lee, Jinwoo Shin
31 Mar 2020 · 54 / 278 / 0

Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation
Linfeng Zhang, Jiebo Song, Anni Gao, Jingwei Chen, Chenglong Bao, Kaisheng Ma
17 May 2019 · FedML · 60 / 860 / 0

A Comprehensive Overhaul of Feature Distillation
Byeongho Heo, Jeesoo Kim, Sangdoo Yun, Hyojin Park, Nojun Kwak, J. Choi
03 Apr 2019 · 76 / 579 / 0

Data-Free Learning of Student Networks
Hanting Chen, Yunhe Wang, Chang Xu, Zhaohui Yang, Chuanjian Liu, Boxin Shi, Chunjing Xu, Chao Xu, Qi Tian
02 Apr 2019 · FedML · 53 / 369 / 0

Snapshot Distillation: Teacher-Student Optimization in One Generation
Chenglin Yang, Lingxi Xie, Chi Su, Alan Yuille
01 Dec 2018 · 64 / 193 / 0

Private Model Compression via Knowledge Distillation
Ji Wang, Weidong Bao, Lichao Sun, Xiaomin Zhu, Bokai Cao, Philip S. Yu
13 Nov 2018 · FedML · 51 / 118 / 0

Born Again Neural Networks
Tommaso Furlanello, Zachary Chase Lipton, Michael Tschannen, Laurent Itti, Anima Anandkumar
12 May 2018 · 68 / 1,030 / 0

Paraphrasing Complex Network: Network Compression via Factor Transfer
Jangho Kim, Seonguk Park, Nojun Kwak
14 Feb 2018 · 66 / 550 / 0

Efficient Algorithms for t-distributed Stochastic Neighborhood Embedding
G. Linderman, M. Rachh, J. Hoskins, Stefan Steinerberger, Y. Kluger
25 Dec 2017 · 63 / 435 / 0

Improved Regularization of Convolutional Neural Networks with Cutout
Terrance Devries, Graham W. Taylor
15 Aug 2017 · 109 / 3,764 / 0

Aggregated Residual Transformations for Deep Neural Networks
Saining Xie, Ross B. Girshick, Piotr Dollár, Zhuowen Tu, Kaiming He
16 Nov 2016 · 506 / 10,318 / 0

Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization
Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra
07 Oct 2016 · FAtt · 282 / 19,981 / 0

Densely Connected Convolutional Networks
Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger
25 Aug 2016 · PINN, 3DV · 762 / 36,781 / 0

Deep Residual Learning for Image Recognition
Kaiming He, Xinming Zhang, Shaoqing Ren, Jian Sun
10 Dec 2015 · MedIm · 2.2K / 193,814 / 0

Rethinking the Inception Architecture for Computer Vision
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, Z. Wojna
02 Dec 2015 · 3DV, BDL · 872 / 27,350 / 0

Distilling the Knowledge in a Neural Network
Geoffrey E. Hinton, Oriol Vinyals, J. Dean
09 Mar 2015 · FedML · 340 / 19,634 / 0

FitNets: Hints for Thin Deep Nets
Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, C. Gatta, Yoshua Bengio
19 Dec 2014 · FedML · 298 / 3,882 / 0