ResearchTrend.AI

ResKD: Residual-Guided Knowledge Distillation (arXiv:2006.04719)

Xuewei Li, Songyuan Li, Bourahla Omar, Fei Wu, Xi Li
8 June 2020

Papers citing "ResKD: Residual-Guided Knowledge Distillation"
18 papers shown

  1. CKD: Contrastive Knowledge Distillation from A Sample-wise Perspective (22 Apr 2024). Wencheng Zhu, Xin Zhou, Pengfei Zhu, Yu Wang, Qinghua Hu.
  2. Robust Face Alignment by Multi-order High-precision Hourglass Network (17 Oct 2020). Jun Wan, Zhihui Lai, Jun Liu, Jie Zhou, C. Gao.
  3. Knowledge Distillation Meets Self-Supervision (12 Jun 2020). Guodong Xu, Ziwei Liu, Xiaoxiao Li, Chen Change Loy.
  4. Search for Better Students to Learn Distilled Knowledge (30 Jan 2020). Jindong Gu, Volker Tresp.
  5. Towards Oracle Knowledge Distillation with Neural Architecture Search (29 Nov 2019). Minsoo Kang, Jonghwan Mun, Bohyung Han.
  6. Self-training with Noisy Student improves ImageNet classification (11 Nov 2019). Qizhe Xie, Minh-Thang Luong, Eduard H. Hovy, Quoc V. Le.
  7. On the Efficacy of Knowledge Distillation (03 Oct 2019). Ligang He, Rui Mao.
  8. Similarity-Preserving Knowledge Distillation (23 Jul 2019). Frederick Tung, Greg Mori.
  9. Ultrafast Video Attention Prediction with Coupled Knowledge Distillation (09 Apr 2019). K. Fu, Peipei Shi, Yafei Song, Shiming Ge, Xiangju Lu, Jia Li.
  10. ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design (30 Jul 2018). Ningning Ma, Xiangyu Zhang, Haitao Zheng, Jian Sun.
  11. DARTS: Differentiable Architecture Search (24 Jun 2018). Hanxiao Liu, Karen Simonyan, Yiming Yang.
  12. SMASH: One-Shot Model Architecture Search through HyperNetworks (17 Aug 2017). Andrew Brock, Theodore Lim, J. Ritchie, Nick Weston.
  13. Learning Transferable Architectures for Scalable Image Recognition (21 Jul 2017). Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le.
  14. Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer (12 Dec 2016). Sergey Zagoruyko, N. Komodakis.
  15. Layer Normalization (21 Jul 2016). Jimmy Lei Ba, J. Kiros, Geoffrey E. Hinton.
  16. FitNets: Hints for Thin Deep Nets (19 Dec 2014). Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, C. Gatta, Yoshua Bengio.
  17. Horizontal and Vertical Ensemble with Deep Representation for Classification (12 Jun 2013). Jingjing Xie, Bing Xu, Chuang Zhang.
  18. Popular Ensemble Methods: An Empirical Study (01 Jun 2011). R. Maclin, D. Opitz.