arXiv:2103.13811
Student Network Learning via Evolutionary Knowledge Distillation
23 March 2021
Kangkai Zhang, Chunhui Zhang, Shikun Li, Dan Zeng, Shiming Ge
Papers citing "Student Network Learning via Evolutionary Knowledge Distillation" (12 papers):
1. Rotation Perturbation Robustness in Point Cloud Analysis: A Perspective of Manifold Distillation
   Xinyu Xu, Huazhen Liu, Feiming Wei, Huilin Xiong, W. Yu, Tao Zhang. 04 Nov 2024. [3DPC]

2. A Lightweight Target-Driven Network of Stereo Matching for Inland Waterways
   Jing Su, Yiqing Zhou, Yu Zhang, Chao Wang, Yi Wei. 10 Oct 2024. [3DV]

3. DiReDi: Distillation and Reverse Distillation for AIoT Applications
   Chen Sun, Qing Tong, Wenshuang Yang, Wenqi Zhang. 12 Sep 2024.

4. Self-Supervised Visual Representation Learning via Residual Momentum
   T. Pham, Axi Niu, Zhang Kang, Sultan Rizky Hikmawan Madjid, Jiajing Hong, Daehyeok Kim, Joshua Tian Jin Tee, Chang D. Yoo. 17 Nov 2022. [SSL]

5. Online Knowledge Distillation via Mutual Contrastive Learning for Visual Recognition
   Chuanguang Yang, Zhulin An, Helong Zhou, Fuzhen Zhuang, Yongjun Xu, Qian Zhang. 23 Jul 2022.

6. Selective-Supervised Contrastive Learning with Noisy Labels
   Shikun Li, Xiaobo Xia, Shiming Ge, Tongliang Liu. 08 Mar 2022. [NoLa]

7. Augmenting Knowledge Distillation With Peer-To-Peer Mutual Learning For Model Compression
   Usma Niyaz, Deepti R. Bathula. 21 Oct 2021.

8. Knowledge Distillation Using Hierarchical Self-Supervision Augmented Distribution
   Chuanguang Yang, Zhulin An, Linhang Cai, Yongjun Xu. 07 Sep 2021.

9. Show, Attend and Distill: Knowledge Distillation via Attention-based Feature Matching
   Mingi Ji, Byeongho Heo, Sungrae Park. 05 Feb 2021.

10. Knowledge Distillation by On-the-Fly Native Ensemble
    Xu Lan, Xiatian Zhu, S. Gong. 12 Jun 2018.

11. Large scale distributed neural network training through online distillation
    Rohan Anil, Gabriel Pereyra, Alexandre Passos, Róbert Ormándi, George E. Dahl, Geoffrey E. Hinton. 09 Apr 2018. [FedML]

12. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
    N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang. 15 Sep 2016. [ODL]