ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

The Power of Depth for Feedforward Neural Networks
Ronen Eldan, Ohad Shamir
12 December 2015 · arXiv:1512.03965

Papers citing "The Power of Depth for Feedforward Neural Networks"

50 / 367 papers shown
When Can Neural Networks Learn Connected Decision Regions?
Trung Le, Dinh Q. Phung · MLT · 25 Jan 2019

On Connected Sublevel Sets in Deep Learning
Quynh N. Nguyen · 22 Jan 2019

Deep Neural Network Approximation Theory
Dennis Elbrächter, Dmytro Perekrestenko, Philipp Grohs, Helmut Bölcskei · 08 Jan 2019

The capacity of feedforward neural networks
Pierre Baldi, Roman Vershynin · 02 Jan 2019
Fast convergence rates of deep neural networks for classification
Yongdai Kim, Ilsang Ohn, Dongha Kim · 3DH, 3DV · 10 Dec 2018

On variation of gradients of deep neural networks
Yongdai Kim, Dongha Kim · ODL, FAtt, MLT · 02 Dec 2018

Sequentially Aggregated Convolutional Networks
Yiwen Huang, Rihui Wu, Pinglai Ou, Ziyong Feng · 27 Nov 2018

A Differential Topological View of Challenges in Learning with Feedforward Neural Networks
Hao Shen · AAML, AI4CE · 26 Nov 2018
Enhanced Expressive Power and Fast Training of Neural Networks by Random Projections
Jian-Feng Cai, Dong Li, Jiaze Sun, Ke Wang · 22 Nov 2018

On a Sparse Shortcut Topology of Artificial Neural Networks
Fenglei Fan, Dayang Wang, Hengtao Guo, Qikui Zhu, Pingkun Yan, Ge Wang, Hengyong Yu · 22 Nov 2018

Data Driven Governing Equations Approximation Using Deep Neural Networks
Tong Qin, Kailiang Wu, D. Xiu · PINN · 13 Nov 2018
Statistical Characteristics of Deep Representations: An Empirical Investigation
Daeyoung Choi, Kyungeun Lee, Changho Shin, Stephen J. Roberts · AI4TS · 08 Nov 2018

Convergence of the Deep BSDE Method for Coupled FBSDEs
Jiequn Han, Jihao Long · 03 Nov 2018

Size-Noise Tradeoffs in Generative Networks
Bolton Bailey, Matus Telgarsky · 26 Oct 2018

Small ReLU networks are powerful memorizers: a tight analysis of memorization capacity
Chulhee Yun, S. Sra, Ali Jadbabaie · 17 Oct 2018
On the Approximation Properties of Random ReLU Features
Yitong Sun, A. Gilbert, Ambuj Tewari · 10 Oct 2018

Deep Neural Network Compression for Aircraft Collision Avoidance Systems
Kyle D. Julian, Mykel J. Kochenderfer, Michael P. Owen · 09 Oct 2018

Understanding Weight Normalized Deep Neural Networks with Rectified Linear Units
Yixi Xu, Tianlin Li · MQ · 03 Oct 2018

Image as Data: Automated Visual Content Analysis for Political Science
Jungseock Joo, Zachary C. Steinert-Threlkeld · 03 Oct 2018

Deep, Skinny Neural Networks are not Universal Approximators
Jesse Johnson · 30 Sep 2018
The jamming transition as a paradigm to understand the loss landscape of deep neural networks
Mario Geiger, S. Spigler, Stéphane d'Ascoli, Levent Sagun, Marco Baity-Jesi, Giulio Biroli, M. Wyart · 25 Sep 2018

A proof that deep artificial neural networks overcome the curse of dimensionality in the numerical approximation of Kolmogorov partial differential equations with constant diffusion and nonlinear drift coefficients
Arnulf Jentzen, Diyora Salimova, Timo Welti · AI4CE · 19 Sep 2018

Analysis of the Generalization Error: Empirical Risk Minimization over Deep Artificial Neural Networks Overcomes the Curse of Dimensionality in the Numerical Approximation of Black-Scholes Partial Differential Equations
Julius Berner, Philipp Grohs, Arnulf Jentzen · 09 Sep 2018

A proof that artificial neural networks overcome the curse of dimensionality in the numerical approximation of Black-Scholes partial differential equations
Philipp Grohs, F. Hornung, Arnulf Jentzen, Philippe von Wurstemberger · 07 Sep 2018
Wide Activation for Efficient and Accurate Image Super-Resolution
Jiahui Yu, Yuchen Fan, Jianchao Yang, N. Xu, Zhaowen Wang, Xinchao Wang, Thomas Huang · SupR · 27 Aug 2018

Training Deeper Neural Machine Translation Models with Transparent Attention
Ankur Bapna, Mengzhao Chen, Orhan Firat, Yuan Cao, Yonghui Wu · 22 Aug 2018

Deep Learning for Energy Markets
Michael Polson, Vadim Sokolov · AI4TS · 16 Aug 2018

Genre-Agnostic Key Classification With Convolutional Neural Networks
Filip Korzeniowski, Gerhard Widmer · 16 Aug 2018

Collapse of Deep and Narrow Neural Nets
Lu Lu, Yanhui Su, George Karniadakis · ODL · 15 Aug 2018

Universal Approximation with Quadratic Deep Networks
Fenglei Fan, Jinjun Xiong, Ge Wang · PINN · 31 Jul 2018
Are Efficient Deep Representations Learnable?
Maxwell Nye, Andrew M. Saxe · 17 Jul 2018

Semi-supervised Feature Learning For Improving Writer Identification
Shiming Chen, Yisong Wang, Chin-Teng Lin, Weiping Ding, Zehong Cao · 15 Jul 2018

Training Neural Networks Using Features Replay
Zhouyuan Huo, Bin Gu, Heng-Chiao Huang · 12 Jul 2018

ResNet with one-neuron hidden layers is a Universal Approximator
Hongzhou Lin, Stefanie Jegelka · 28 Jun 2018
On the Spectral Bias of Neural Networks
Nasim Rahaman, A. Baratin, Devansh Arpit, Felix Dräxler, Min Lin, Fred Hamprecht, Yoshua Bengio, Aaron Courville · 22 Jun 2018

On Tighter Generalization Bound for Deep Neural Networks: CNNs, ResNets, and Beyond
Xingguo Li, Junwei Lu, Zhaoran Wang, Jarvis Haupt, T. Zhao · 13 Jun 2018

The Nonlinearity Coefficient - Predicting Generalization in Deep Neural Networks
George Philipp, J. Carbonell · 01 Jun 2018

Interpreting Deep Learning: The Machine Learning Rorschach Test?
Adam S. Charles · AAML, HAI, AI4CE · 01 Jun 2018
Representational Power of ReLU Networks and Polynomial Kernels: Beyond Worst-Case Analysis
Frederic Koehler, Andrej Risteski · 29 May 2018

Universality of Deep Convolutional Neural Networks
Ding-Xuan Zhou · HAI, PINN · 28 May 2018

Understanding Generalization and Optimization Performance of Deep CNNs
Pan Zhou, Jiashi Feng · MLT · 28 May 2018

Learning Restricted Boltzmann Machines via Influence Maximization
Guy Bresler, Frederic Koehler, Ankur Moitra, Elchanan Mossel · AI4CE · 25 May 2018

Adversarially Robust Training through Structured Gradient Regularization
Kevin Roth, Aurelien Lucchi, Sebastian Nowozin, Thomas Hofmann · 22 May 2018

Reducing Parameter Space for Neural Network Training
Tong Qin, Ling Zhou, D. Xiu · 22 May 2018
Butterfly-Net: Optimal Function Representation Based on Convolutional Neural Networks
Yingzhou Li, Xiuyuan Cheng, Jianfeng Lu · 18 May 2018

Tropical Geometry of Deep Neural Networks
Liwen Zhang, Gregory Naitzat, Lek-Heng Lim · 18 May 2018

Doing the impossible: Why neural networks can be trained at all
Nathan Oken Hodas, P. Stinis · AI4CE · 13 May 2018

Gradient Descent for One-Hidden-Layer Neural Networks: Polynomial Convergence and SQ Lower Bounds
Santosh Vempala, John Wilmes · MLT · 07 May 2018

Decoupled Parallel Backpropagation with Convergence Guarantee
Zhouyuan Huo, Bin Gu, Qian Yang, Heng-Chiao Huang · 27 Apr 2018

A comparison of deep networks with ReLU activation function and linear spline-type methods
Konstantin Eckle, Johannes Schmidt-Hieber · 06 Apr 2018