ResearchTrend.AI

Do Deep Convolutional Nets Really Need to be Deep and Convolutional? (arXiv:1603.05691, v4 latest)
17 March 2016
G. Urban, Krzysztof J. Geras, Samira Ebrahimi Kahou, Ozlem Aslan, Shengjie Wang, R. Caruana, Abdel-rahman Mohamed, Matthai Philipose, Matthew Richardson

Papers citing "Do Deep Convolutional Nets Really Need to be Deep and Convolutional?" (18 papers shown)

1. Deep Residual Learning for Image Recognition
   Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun · 10 Dec 2015 · 194,641 citations

2. Blending LSTMs into CNNs
   Krzysztof J. Geras, Abdel-rahman Mohamed, R. Caruana, G. Urban, Shengjie Wang, Ozlem Aslan, Matthai Philipose, Matthew Richardson, Charles Sutton · 19 Nov 2015 · 60 citations

3. Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning
   Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov · 19 Nov 2015 · 599 citations

4. Policy Distillation
   Andrei A. Rusu, Sergio Gomez Colmenarejo, Çağlar Gülçehre, Guillaume Desjardins, J. Kirkpatrick, Razvan Pascanu, Volodymyr Mnih, Koray Kavukcuoglu, R. Hadsell · 19 Nov 2015 · 696 citations

5. How far can we go without convolution: Improving fully-connected networks
   Zhouhan Lin, Roland Memisevic, K. Konda · 09 Nov 2015 · 52 citations

6. Training Very Deep Networks
   R. Srivastava, Klaus Greff, Jürgen Schmidhuber · 22 Jul 2015 · 1,687 citations

7. Transferring Knowledge from a RNN to a DNN
   William Chan, Nan Rosemary Ke, Ian Lane · 07 Apr 2015 · 75 citations

8. Distilling the Knowledge in a Neural Network
   Geoffrey E. Hinton, Oriol Vinyals, J. Dean · 09 Mar 2015 · 19,745 citations

9. Scalable Bayesian Optimization Using Deep Neural Networks
   Jasper Snoek, Oren Rippel, Kevin Swersky, Ryan Kiros, N. Satish, N. Sundaram, Md. Mostofa Ali Patwary, P. Prabhat, Ryan P. Adams · 19 Feb 2015 · 1,045 citations

10. FitNets: Hints for Thin Deep Nets
    Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, C. Gatta, Yoshua Bengio · 19 Dec 2014 · 3,906 citations

11. Very Deep Convolutional Networks for Large-Scale Image Recognition
    Karen Simonyan, Andrew Zisserman · 04 Sep 2014 · 100,575 citations

12. Scheduled denoising autoencoders
    Krzysztof J. Geras, Charles Sutton · 12 Jun 2014 · 47 citations

13. Zero-bias autoencoders and the benefits of co-adapting features
    K. Konda, Roland Memisevic, David M. Krueger · 13 Feb 2014 · 92 citations

14. Do Deep Nets Really Need to be Deep?
    Lei Jimmy Ba, R. Caruana · 21 Dec 2013 · 2,120 citations

15. Understanding Deep Architectures using a Recursive Convolutional Network
    David Eigen, J. Rolfe, Rob Fergus, Yann LeCun · 06 Dec 2013 · 146 citations

16. Big Neural Networks Waste Capacity
    Yann N. Dauphin, Yoshua Bengio · 16 Jan 2013 · 84 citations

17. Theano: new features and speed improvements
    Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian Goodfellow, Arnaud Bergeron, Nicolas Bouchard, David Warde-Farley, Yoshua Bengio · 23 Nov 2012 · 1,420 citations

18. Practical Bayesian Optimization of Machine Learning Algorithms
    Jasper Snoek, Hugo Larochelle, Ryan P. Adams · 13 Jun 2012 · 7,981 citations