Knowledge Distillation by On-the-Fly Native Ensemble

Xu Lan, Xiatian Zhu, S. Gong
arXiv:1806.04606, 12 June 2018

Papers citing "Knowledge Distillation by On-the-Fly Native Ensemble"

43 of 93 citing papers shown.

Distill on the Go: Online knowledge distillation in self-supervised learning
Prashant Shivaram Bhat, Elahe Arani, Bahram Zonooz (20 Apr 2021) [SSL]

Dive into Ambiguity: Latent Distribution Mining and Pairwise Uncertainty Estimation for Facial Expression Recognition
Jiahui She, Yibo Hu, Hailin Shi, Jun Wang, Qiu Shen, Tao Mei (01 Apr 2021)

Distilling a Powerful Student Model via Online Knowledge Distillation
Shaojie Li, Mingbao Lin, Yan Wang, Yongjian Wu, Yonghong Tian, Ling Shao, Rongrong Ji (26 Mar 2021) [FedML]

Student Network Learning via Evolutionary Knowledge Distillation
Kangkai Zhang, Chunhui Zhang, Shikun Li, Dan Zeng, Shiming Ge (23 Mar 2021)

Refine Myself by Teaching Myself: Feature Refinement via Self-Knowledge Distillation
Mingi Ji, Seungjae Shin, Seunghyun Hwang, Gibeom Park, Il-Chul Moon (15 Mar 2021)

Locally Adaptive Label Smoothing for Predictive Churn
Dara Bahri, Heinrich Jiang (09 Feb 2021) [NoLa]

On the Reproducibility of Neural Network Predictions
Srinadh Bhojanapalli, Kimberly Wilber, Andreas Veit, A. S. Rawat, Seungyeon Kim, A. Menon, Sanjiv Kumar (05 Feb 2021)

DICE: Diversity in Deep Ensembles via Conditional Redundancy Adversarial Estimation
Alexandre Ramé, Matthieu Cord (14 Jan 2021) [FedML]

Re-labeling ImageNet: from Single to Multi-Labels, from Global to Localized Labels
Sangdoo Yun, Seong Joon Oh, Byeongho Heo, Dongyoon Han, Junsuk Choe, Sanghyuk Chun (13 Jan 2021)

Towards Understanding Ensemble, Knowledge Distillation and Self-Distillation in Deep Learning
Zeyuan Allen-Zhu, Yuanzhi Li (17 Dec 2020) [FedML]

Data-Free Model Extraction
Jean-Baptiste Truong, Pratyush Maini, R. Walls, Nicolas Papernot (30 Nov 2020) [MIACV]

Distilling Knowledge by Mimicking Features
G. Wang, Yifan Ge, Jianxin Wu (03 Nov 2020)

Anti-Distillation: Improving reproducibility of deep networks
G. Shamir, Lorenzo Coviello (19 Oct 2020)

Densely Guided Knowledge Distillation using Multiple Teacher Assistants
Wonchul Son, Jaemin Na, Junyong Choi, Wonjun Hwang (18 Sep 2020)

MEAL V2: Boosting Vanilla ResNet-50 to 80%+ Top-1 Accuracy on ImageNet without Tricks
Zhiqiang Shen, Marios Savvides (17 Sep 2020)

Temporal Self-Ensembling Teacher for Semi-Supervised Object Detection
Cong Chen, Shouyang Dong, Ye Tian, K. Cao, Li Liu, Yuanhao Guo (13 Jul 2020)

Knowledge Distillation Beyond Model Compression
F. Sarfraz, Elahe Arani, Bahram Zonooz (03 Jul 2020)

Knowledge Distillation Meets Self-Supervision
Guodong Xu, Ziwei Liu, Xiaoxiao Li, Chen Change Loy (12 Jun 2020) [FedML]

Knowledge Distillation: A Survey
Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao (09 Jun 2020) [VLM]

Self-Distillation as Instance-Specific Label Smoothing
Zhilu Zhang, M. Sabuncu (09 Jun 2020)

ResKD: Residual-Guided Knowledge Distillation
Xuewei Li, Songyuan Li, Bourahla Omar, Fei Wu, Xi Li (08 Jun 2020)

Multi-view Contrastive Learning for Online Knowledge Distillation
Chuanguang Yang, Zhulin An, Yongjun Xu (07 Jun 2020)

An Overview of Neural Network Compression
James O'Neill (05 Jun 2020) [AI4CE]

Structure-Level Knowledge Distillation For Multilingual Sequence Labeling
Xinyu Wang, Yong-jia Jiang, Nguyen Bach, Tao Wang, Fei Huang, Kewei Tu (08 Apr 2020)

Knowing What, Where and When to Look: Efficient Video Action Modeling with Attention
Juan-Manuel Perez-Rua, Brais Martínez, Xiatian Zhu, Antoine Toisoul, Victor Escorcia, Tao Xiang (02 Apr 2020)

Self-Augmentation: Generalizing Deep Networks to Unseen Classes for Few-Shot Learning
Jinhwan Seo, Hong G Jung, Seong-Whan Lee (01 Apr 2020) [SSL]

Self-Distillation Amplifies Regularization in Hilbert Space
H. Mobahi, Mehrdad Farajtabar, Peter L. Bartlett (13 Feb 2020)

Feature-map-level Online Adversarial Knowledge Distillation
Inseop Chung, Seonguk Park, Jangho Kim, Nojun Kwak (05 Feb 2020) [GAN]

Towards Oracle Knowledge Distillation with Neural Architecture Search
Minsoo Kang, Jonghwan Mun, Bohyung Han (29 Nov 2019) [FedML]

QKD: Quantization-aware Knowledge Distillation
Jangho Kim, Yash Bhalgat, Jinwon Lee, Chirag I. Patel, Nojun Kwak (28 Nov 2019) [MQ]

Contrastive Representation Distillation
Yonglong Tian, Dilip Krishnan, Phillip Isola (23 Oct 2019)

On the Efficacy of Knowledge Distillation
Ligang He, Rui Mao (03 Oct 2019)

Deep Model Transferability from Attribution Maps
Mingli Song, Yixin Chen, Xinchao Wang, Chengchao Shen, Xiuming Zhang (26 Sep 2019)

FEED: Feature-level Ensemble for Knowledge Distillation
Seonguk Park, Nojun Kwak (24 Sep 2019) [FedML]

Knowledge Transfer Graph for Deep Collaborative Learning
Soma Minami, Tsubasa Hirakawa, Takayoshi Yamashita, H. Fujiyoshi (10 Sep 2019)

Adaptive Regularization of Labels
Qianggang Ding, Sifan Wu, Hao Sun, Jiadong Guo, Shutao Xia (15 Aug 2019) [ODL]

Similarity-Preserving Knowledge Distillation
Frederick Tung, Greg Mori (23 Jul 2019)

Feature Fusion for Online Mutual Knowledge Distillation
Jangho Kim, Minsung Hyun, Inseop Chung, Nojun Kwak (19 Apr 2019) [FedML]

Efficient Video Classification Using Fewer Frames
S. Bhardwaj, Mukundhan Srinivasan, Mitesh M. Khapra (27 Feb 2019)

Multilingual Neural Machine Translation with Knowledge Distillation
Xu Tan, Yi Ren, Di He, Tao Qin, Zhou Zhao, Tie-Yan Liu (27 Feb 2019)

Self-Referenced Deep Learning
Xu Lan, Xiatian Zhu, S. Gong (19 Nov 2018)

Large scale distributed neural network training through online distillation
Rohan Anil, Gabriel Pereyra, Alexandre Passos, Róbert Ormándi, George E. Dahl, Geoffrey E. Hinton (09 Apr 2018) [FedML]

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang (15 Sep 2016) [ODL]