Refine Myself by Teaching Myself: Feature Refinement via Self-Knowledge Distillation
Mingi Ji, Seungjae Shin, Seunghyun Hwang, Gibeom Park, Il-Chul Moon
15 March 2021 · arXiv: 2103.08273
Papers citing "Refine Myself by Teaching Myself: Feature Refinement via Self-Knowledge Distillation" (13 / 13 papers shown)
Continuous Sign Language Recognition Based on Motor attention mechanism and frame-level Self-distillation
Qidan Zhu, Jing Li, Fei Yuan, Quan Gan · 29 Feb 2024 · SLR

GasMono: Geometry-Aided Self-Supervised Monocular Depth Estimation for Indoor Scenes
Chaoqiang Zhao, Matteo Poggi, Fabio Tosi, Lei Zhou, Qiyu Sun, Yang Tang, S. Mattoccia · 26 Sep 2023 · MDE

From Knowledge Distillation to Self-Knowledge Distillation: A Unified Approach with Normalized Loss and Customized Soft Labels
Zhendong Yang, Ailing Zeng, Zhe Li, Tianke Zhang, Chun Yuan, Yu Li · 23 Mar 2023

Guided Hybrid Quantization for Object detection in Multimodal Remote Sensing Imagery via One-to-one Self-teaching
Jiaqing Zhang, Jie Lei, Weiying Xie, Yunsong Li, Wenxuan Wang · 31 Dec 2022 · MQ

Curriculum Temperature for Knowledge Distillation
Zheng Li, Xiang Li, Lingfeng Yang, Borui Zhao, Renjie Song, Lei Luo, Jun Yu Li, Jian Yang · 29 Nov 2022

SADT: Combining Sharpness-Aware Minimization with Self-Distillation for Improved Model Generalization
Masud An Nur Islam Fahim, Jani Boutellier · 01 Nov 2022

Respecting Transfer Gap in Knowledge Distillation
Yulei Niu, Long Chen, Chan Zhou, Hanwang Zhang · 23 Oct 2022

A Novel Self-Knowledge Distillation Approach with Siamese Representation Learning for Action Recognition
Duc-Quang Vu, T. Phung, Jia-Ching Wang · 03 Sep 2022

FedX: Unsupervised Federated Learning with Cross Knowledge Distillation
Sungwon Han, Sungwon Park, Fangzhao Wu, Sundong Kim, Chuhan Wu, Xing Xie, M. Cha · 19 Jul 2022 · FedML

Reducing Flipping Errors in Deep Neural Networks
Xiang Deng, Yun Xiao, Bo Long, Zhongfei Zhang · 16 Mar 2022 · AAML

MUSE: Feature Self-Distillation with Mutual Information and Self-Information
Yunpeng Gong, Ye Yu, Gaurav Mittal, Greg Mori, Mei Chen · 25 Oct 2021 · SSL

Knowledge Distillation by On-the-Fly Native Ensemble
Xu Lan, Xiatian Zhu, S. Gong · 12 Jun 2018

MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, M. Andreetto, Hartwig Adam · 17 Apr 2017 · 3DH