arXiv: 2002.12597
An Efficient Method of Training Small Models for Regression Problems with Knowledge Distillation
M. Takamoto, Yusuke Morishita, Hitoshi Imaoka
28 February 2020
Papers citing "An Efficient Method of Training Small Models for Regression Problems with Knowledge Distillation" (8 papers):
- Wing Loss for Robust Facial Landmark Localisation with Convolutional Neural Networks. Zhenhua Feng, J. Kittler, Muhammad Awais, P. Huber, Xiaojun Wu. 17 Nov 2017.
- A Closer Look at Memorization in Deep Networks. Devansh Arpit, Stanislaw Jastrzebski, Nicolas Ballas, David M. Krueger, Emmanuel Bengio, ..., Tegan Maharaj, Asja Fischer, Aaron Courville, Yoshua Bengio, Simon Lacoste-Julien. 16 Jun 2017.
- Understanding deep learning requires rethinking generalization. Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals. 10 Nov 2016.
- Wide Residual Networks. Sergey Zagoruyko, N. Komodakis. 23 May 2016.
- Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding. Song Han, Huizi Mao, W. Dally. 01 Oct 2015.
- Robust Optimization for Deep Regression. Vasileios Belagiannis, Christian Rupprecht, G. Carneiro, Nassir Navab. 25 May 2015.
- FitNets: Hints for Thin Deep Nets. Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, C. Gatta, Yoshua Bengio. 19 Dec 2014.
- Do Deep Nets Really Need to be Deep? Lei Jimmy Ba, R. Caruana. 21 Dec 2013.