Optimization Landscape and Expressivity of Deep CNNs
Quynh N. Nguyen, Matthias Hein
30 October 2017

Papers citing "Optimization Landscape and Expressivity of Deep CNNs" (49 papers shown)

When is a Convolutional Filter Easy To Learn?
S. Du, Jason D. Lee, Yuandong Tian · MLT · 18 Sep 2017

Learning Neural Networks with Two Nonlinear Layers in Polynomial Time
Surbhi Goel, Adam R. Klivans · 18 Sep 2017

Global optimality conditions for deep neural networks
Chulhee Yun, S. Sra, Ali Jadbabaie · 08 Jul 2017

Recovery Guarantees for One-hidden-layer Neural Networks
Kai Zhong, Zhao Song, Prateek Jain, Peter L. Bartlett, Inderjit S. Dhillon · MLT · 10 Jun 2017

Convergence Analysis of Two-layer Neural Networks with ReLU Activation
Yuanzhi Li, Yang Yuan · MLT · 28 May 2017

Learning ReLUs via Gradient Descent
Mahdi Soltanolkotabi · MLT · 10 May 2017

The loss surface of deep and wide neural networks
Quynh N. Nguyen, Matthias Hein · ODL · 26 Apr 2017

Failures of Gradient-Based Deep Learning
Shai Shalev-Shwartz, Ohad Shamir, Shaked Shammah · ODL, UQCV · 23 Mar 2017

An Analytical Formula of Population Gradient for two-layered ReLU network and its Applications in Convergence and Critical Point Analysis
Yuandong Tian · MLT · 02 Mar 2017

Understanding Synthetic Gradients and Decoupled Neural Interfaces
Wojciech M. Czarnecki, G. Swirszcz, Max Jaderberg, Simon Osindero, Oriol Vinyals, Koray Kavukcuoglu · 01 Mar 2017

Globally Optimal Gradient Descent for a ConvNet with Gaussian Inputs
Alon Brutzkus, Amir Globerson · MLT · 26 Feb 2017

Exponentially vanishing sub-optimal local minima in multilayer neural networks
Daniel Soudry, Elad Hoffer · 19 Feb 2017

Identity Matters in Deep Learning
Moritz Hardt, Tengyu Ma · OOD · 14 Nov 2016

Understanding deep learning requires rethinking generalization
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals · HAI · 10 Nov 2016

Topology and Geometry of Half-Rectified Network Optimization
C. Freeman, Joan Bruna · 04 Nov 2016

Why and When Can Deep -- but Not Shallow -- Networks Avoid the Curse of Dimensionality: a Review
T. Poggio, H. Mhaskar, Lorenzo Rosasco, Brando Miranda, Q. Liao · 02 Nov 2016

Depth-Width Tradeoffs in Approximating Natural Functions with Neural Networks
Itay Safran, Ohad Shamir · 31 Oct 2016

Globally Optimal Training of Generalized Polynomial Neural Networks with Nonlinear Spectral Methods
A. Gautier, Quynh N. Nguyen, Matthias Hein · 28 Oct 2016

Why Deep Neural Networks for Function Approximation?
Shiyu Liang, R. Srikant · 13 Oct 2016

Xception: Deep Learning with Depthwise Separable Convolutions
François Chollet · MDE, BDL, PINN · 07 Oct 2016

Understanding intermediate layers using linear classifier probes
Guillaume Alain, Yoshua Bengio · FAtt · 05 Oct 2016

Error bounds for approximations with deep ReLU networks
Dmitry Yarotsky · 03 Oct 2016

Distribution-Specific Hardness of Learning Neural Networks
Ohad Shamir · 05 Sep 2016

Deep vs. shallow networks: An approximation theory perspective
H. Mhaskar, T. Poggio · 10 Aug 2016

On the Expressive Power of Deep Neural Networks
M. Raghu, Ben Poole, Jon M. Kleinberg, Surya Ganguli, Jascha Narain Sohl-Dickstein · 16 Jun 2016

ENet: A Deep Neural Network Architecture for Real-Time Semantic Segmentation
Adam Paszke, Abhishek Chaurasia, Sangpil Kim, Eugenio Culurciello · SSeg · 07 Jun 2016

Deep Learning without Poor Local Minima
Kenji Kawaguchi · ODL · 23 May 2016

Convolutional Rectifier Networks as Generalized Tensor Decompositions
Nadav Cohen, Amnon Shashua · 01 Mar 2016

SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size
F. Iandola, Song Han, Matthew W. Moskewicz, Khalid Ashraf, W. Dally, Kurt Keutzer · 24 Feb 2016

Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning
Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, Alexander A. Alemi · 23 Feb 2016

Benefits of depth in neural networks
Matus Telgarsky · 14 Feb 2016

The Power of Depth for Feedforward Neural Networks
Ronen Eldan, Ohad Shamir · 12 Dec 2015

Deep Residual Learning for Image Recognition
Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun · MedIm · 10 Dec 2015

Rethinking the Inception Architecture for Computer Vision
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, Z. Wojna · 3DV, BDL · 02 Dec 2015

Representation Benefits of Deep Feedforward Networks
Matus Telgarsky · 27 Sep 2015

Global Optimality in Tensor Factorization, Deep Learning, and Beyond
B. Haeffele, René Vidal · 24 Jun 2015

Understanding Neural Networks Through Deep Visualization
J. Yosinski, Jeff Clune, Anh Totti Nguyen, Thomas J. Fuchs, Hod Lipson · FAtt, AI4CE · 22 Jun 2015

Qualitatively characterizing neural network optimization problems
Ian Goodfellow, Oriol Vinyals, Andrew M. Saxe · ODL · 19 Dec 2014

Provable Methods for Training Neural Networks with Sparse Connectivity
Hanie Sedghi, Anima Anandkumar · 08 Dec 2014

The Loss Surfaces of Multilayer Networks
A. Choromańska, Mikael Henaff, Michaël Mathieu, Gerard Ben Arous, Yann LeCun · ODL · 30 Nov 2014

Understanding Deep Image Representations by Inverting Them
Aravindh Mahendran, Andrea Vedaldi · FAtt · 26 Nov 2014

How transferable are features in deep neural networks?
J. Yosinski, Jeff Clune, Yoshua Bengio, Hod Lipson · OOD · 06 Nov 2014

On the Computational Efficiency of Training Neural Networks
Roi Livni, Shai Shalev-Shwartz, Ohad Shamir · 05 Oct 2014

Going Deeper with Convolutions
Christian Szegedy, Wei Liu, Yangqing Jia, P. Sermanet, Scott E. Reed, Dragomir Anguelov, D. Erhan, Vincent Vanhoucke, Andrew Rabinovich · 17 Sep 2014

Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman · FAtt, MDE · 04 Sep 2014

Identifying and attacking the saddle point problem in high-dimensional non-convex optimization
Yann N. Dauphin, Razvan Pascanu, Çağlar Gülçehre, Kyunghyun Cho, Surya Ganguli, Yoshua Bengio · ODL · 10 Jun 2014

On the Number of Linear Regions of Deep Neural Networks
Guido Montúfar, Razvan Pascanu, Kyunghyun Cho, Yoshua Bengio · 08 Feb 2014

On the number of response regions of deep feed forward networks with piece-wise linear activations
Razvan Pascanu, Guido Montúfar, Yoshua Bengio · FAtt · 20 Dec 2013

Visualizing and Understanding Convolutional Networks
Matthew D. Zeiler, Rob Fergus · FAtt, SSL · 12 Nov 2013