Refine Myself by Teaching Myself: Feature Refinement via Self-Knowledge Distillation

15 March 2021
Mingi Ji, Seungjae Shin, Seunghyun Hwang, Gibeom Park, Il-Chul Moon
arXiv:2103.08273

Papers citing "Refine Myself by Teaching Myself: Feature Refinement via Self-Knowledge Distillation"

13 / 13 papers shown
Continuous Sign Language Recognition Based on Motor attention mechanism and frame-level Self-distillation
  Qidan Zhu, Jing Li, Fei Yuan, Quan Gan
  SLR · 29 Feb 2024
GasMono: Geometry-Aided Self-Supervised Monocular Depth Estimation for Indoor Scenes
  Chaoqiang Zhao, Matteo Poggi, Fabio Tosi, Lei Zhou, Qiyu Sun, Yang Tang, S. Mattoccia
  MDE · 26 Sep 2023
From Knowledge Distillation to Self-Knowledge Distillation: A Unified Approach with Normalized Loss and Customized Soft Labels
  Zhendong Yang, Ailing Zeng, Zhe Li, Tianke Zhang, Chun Yuan, Yu Li
  23 Mar 2023
Guided Hybrid Quantization for Object detection in Multimodal Remote Sensing Imagery via One-to-one Self-teaching
  Jiaqing Zhang, Jie Lei, Weiying Xie, Yunsong Li, Wenxuan Wang
  MQ · 31 Dec 2022
Curriculum Temperature for Knowledge Distillation
  Zheng Li, Xiang Li, Lingfeng Yang, Borui Zhao, Renjie Song, Lei Luo, Jun Yu Li, Jian Yang
  29 Nov 2022
SADT: Combining Sharpness-Aware Minimization with Self-Distillation for Improved Model Generalization
  Masud An Nur Islam Fahim, Jani Boutellier
  01 Nov 2022
Respecting Transfer Gap in Knowledge Distillation
  Yulei Niu, Long Chen, Chan Zhou, Hanwang Zhang
  23 Oct 2022
A Novel Self-Knowledge Distillation Approach with Siamese Representation Learning for Action Recognition
  Duc-Quang Vu, T. Phung, Jia-Ching Wang
  03 Sep 2022
FedX: Unsupervised Federated Learning with Cross Knowledge Distillation
  Sungwon Han, Sungwon Park, Fangzhao Wu, Sundong Kim, Chuhan Wu, Xing Xie, M. Cha
  FedML · 19 Jul 2022
Reducing Flipping Errors in Deep Neural Networks
  Xiang Deng, Yun Xiao, Bo Long, Zhongfei Zhang
  AAML · 16 Mar 2022
MUSE: Feature Self-Distillation with Mutual Information and Self-Information
  Yunpeng Gong, Ye Yu, Gaurav Mittal, Greg Mori, Mei Chen
  SSL · 25 Oct 2021
Knowledge Distillation by On-the-Fly Native Ensemble
  Xu Lan, Xiatian Zhu, S. Gong
  12 Jun 2018
MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
  Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, M. Andreetto, Hartwig Adam
  3DH · 17 Apr 2017