Revisiting Knowledge Distillation via Label Smoothing Regularization
Li-xin Yuan, Francis E. H. Tay, Guilin Li, Tao Wang, Jiashi Feng
arXiv:1909.11723 · 25 September 2019

Papers citing "Revisiting Knowledge Distillation via Label Smoothing Regularization"

16 / 16 papers shown
Towards Comparable Knowledge Distillation in Semantic Image Segmentation
Onno Niemann, Christopher Vox, Thorben Werner
VLM · 25 · 1 · 0 · 07 Sep 2023

SATA: Source Anchoring and Target Alignment Network for Continual Test Time Adaptation
Goirik Chakrabarty, Manogna Sreenivas, Soma Biswas
TTA · 41 · 5 · 0 · 20 Apr 2023

In-context Learning Distillation: Transferring Few-shot Learning Ability of Pre-trained Language Models
Yukun Huang, Yanda Chen, Zhou Yu, Kathleen McKeown
27 · 30 · 0 · 20 Dec 2022

Co-training $2^L$ Submodels for Visual Recognition
Hugo Touvron, Matthieu Cord, Maxime Oquab, Piotr Bojanowski, Jakob Verbeek, Hervé Jégou
VLM · 35 · 9 · 0 · 09 Dec 2022

Boosting Graph Neural Networks via Adaptive Knowledge Distillation
Zhichun Guo, Chunhui Zhang, Yujie Fan, Yijun Tian, Chuxu Zhang, Nitesh V. Chawla
21 · 32 · 0 · 12 Oct 2022

Using Knowledge Distillation to improve interpretable models in a retail banking context
Maxime Biehler, Mohamed Guermazi, Célim Starck
62 · 2 · 0 · 30 Sep 2022

Rich Feature Distillation with Feature Affinity Module for Efficient Image Dehazing
S. J., Anushri Suresh, Nisha J.S., V. Gopi
VLM · 31 · 6 · 0 · 13 Jul 2022

PrUE: Distilling Knowledge from Sparse Teacher Networks
Shaopu Wang, Xiaojun Chen, Mengzhen Kou, Jinqiao Shi
19 · 2 · 0 · 03 Jul 2022

Towards Accurate Human Pose Estimation in Videos of Crowded Scenes
Li Yuan, Shuning Chang, Xuecheng Nie, Ziyuan Huang, Yichen Zhou, Yupeng Chen, Jiashi Feng, Shuicheng Yan
29 · 15 · 0 · 16 Oct 2020

Densely Guided Knowledge Distillation using Multiple Teacher Assistants
Wonchul Son, Jaemin Na, Junyong Choi, Wonjun Hwang
25 · 111 · 0 · 18 Sep 2020

Prime-Aware Adaptive Distillation
Youcai Zhang, Zhonghao Lan, Yuchen Dai, Fangao Zeng, Yan Bai, Jie Chang, Yichen Wei
18 · 40 · 0 · 04 Aug 2020

Knowledge Distillation: A Survey
Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao
VLM · 19 · 2,843 · 0 · 09 Jun 2020

Self-Distillation as Instance-Specific Label Smoothing
Zhilu Zhang, M. Sabuncu
20 · 116 · 0 · 09 Jun 2020

Understanding and Improving Knowledge Distillation
Jiaxi Tang, Rakesh Shivanna, Zhe Zhao, Dong Lin, Anima Singh, Ed H. Chi, Sagar Jain
27 · 129 · 0 · 10 Feb 2020

Preparing Lessons: Improve Knowledge Distillation with Better Supervision
Tiancheng Wen, Shenqi Lai, Xueming Qian
25 · 67 · 0 · 18 Nov 2019

Large scale distributed neural network training through online distillation
Rohan Anil, Gabriel Pereyra, Alexandre Passos, Róbert Ormándi, George E. Dahl, Geoffrey E. Hinton
FedML · 278 · 404 · 0 · 09 Apr 2018