On the Expressive Power of Deep Neural Networks
arXiv:1606.05336, 16 June 2016
M. Raghu, Ben Poole, Jon M. Kleinberg, Surya Ganguli, Jascha Narain Sohl-Dickstein

Papers citing "On the Expressive Power of Deep Neural Networks"

50 / 267 papers shown
Empirical Bounds on Linear Regions of Deep Rectifier Networks
Thiago Serra, Srikumar Ramalingam
08 Oct 2018

Implicit Self-Regularization in Deep Neural Networks: Evidence from Random Matrix Theory and Implications for Learning
Charles H. Martin, Michael W. Mahoney
02 Oct 2018

Deep Neural Networks for Estimation and Inference
M. Farrell, Tengyuan Liang, S. Misra
26 Sep 2018

The jamming transition as a paradigm to understand the loss landscape of deep neural networks
Mario Geiger, S. Spigler, Stéphane d'Ascoli, Levent Sagun, Marco Baity-Jesi, Giulio Biroli, Matthieu Wyart
25 Sep 2018

Towards Understanding Regularization in Batch Normalization
Ping Luo, Xinjiang Wang, Wenqi Shao, Zhanglin Peng
04 Sep 2018

Deep Learning of Vortex Induced Vibrations
M. Raissi, Zhicheng Wang, M. Triantafyllou, George Karniadakis
26 Aug 2018

Learning ReLU Networks on Linearly Separable Data: Algorithm, Optimality, and Generalization
G. Wang, G. Giannakis, Jie Chen
14 Aug 2018

Hidden Fluid Mechanics: A Navier-Stokes Informed Deep Learning Framework for Assimilating Flow Visualization Data
M. Raissi, A. Yazdani, George Karniadakis
13 Aug 2018

Bounds on the Approximation Power of Feedforward Neural Networks
M. Mehrabi, A. Tchamkerten, Mansoor I. Yousefi
29 Jun 2018

Adversarial Reprogramming of Neural Networks
Gamaleldin F. Elsayed, Ian Goodfellow, Jascha Narain Sohl-Dickstein
28 Jun 2018
On the Spectral Bias of Neural Networks
Nasim Rahaman, A. Baratin, Devansh Arpit, Felix Dräxler, Min Lin, Fred Hamprecht, Yoshua Bengio, Aaron Courville
22 Jun 2018

Learning One-hidden-layer ReLU Networks via Gradient Descent
Xiao Zhang, Yaodong Yu, Lingxiao Wang, Quanquan Gu
20 Jun 2018

How Could Polyhedral Theory Harness Deep Learning?
Thiago Serra, Christian Tjandraatmadja, Srikumar Ramalingam
17 Jun 2018

A Framework for the construction of upper bounds on the number of affine linear regions of ReLU feed-forward neural networks
Peter Hinz, Sara van de Geer
05 Jun 2018

Universal Statistics of Fisher Information in Deep Neural Networks: Mean Field Approach
Ryo Karakida, S. Akaho, S. Amari
04 Jun 2018

The Nonlinearity Coefficient - Predicting Generalization in Deep Neural Networks
George Philipp, J. Carbonell
01 Jun 2018

Entropy and mutual information in models of deep neural networks
Marylou Gabrié, Andre Manoel, Clément Luneau, Jean Barbier, N. Macris, Florent Krzakala, Lenka Zdeborová
24 May 2018

Mean Field Theory of Activation Functions in Deep Neural Networks
M. Milletarí, Thiparat Chotibut, P. E. Trevisanutto
22 May 2018

A Tropical Approach to Neural Networks with Piecewise Linear Activations
Vasileios Charisopoulos, Petros Maragos
22 May 2018

Tropical Geometry of Deep Neural Networks
Liwen Zhang, Gregory Naitzat, Lek-Heng Lim
18 May 2018
End-to-end Learning of a Convolutional Neural Network via Deep Tensor Decomposition
Samet Oymak, Mahdi Soltanolkotabi
16 May 2018

Improving GAN Training via Binarized Representation Entropy (BRE) Regularization
Yanshuai Cao, G. Ding, Kry Yik-Chau Lui, Ruitong Huang
09 May 2018

What do Deep Networks Like to See?
Sebastián M. Palacio, Joachim Folz, Jörn Hees, Federico Raue, Damian Borth, Andreas Dengel
22 Mar 2018

Learning the Localization Function: Machine Learning Approach to Fingerprinting Localization
Linchen Xiao, Arash Behboodi, R. Mathar
21 Mar 2018

On the importance of single directions for generalization
Ari S. Morcos, David Barrett, Neil C. Rabinowitz, M. Botvinick
19 Mar 2018

Generalization and Expressivity for Deep Nets
Shao-Bo Lin
10 Mar 2018

How to Start Training: The Effect of Initialization and Architecture
Boris Hanin, David Rolnick
05 Mar 2018

Neural Networks Should Be Wide Enough to Learn Disconnected Decision Regions
Quynh N. Nguyen, Mahesh Chandra Mukkamala, Matthias Hein
28 Feb 2018

Sensitivity and Generalization in Neural Networks: an Empirical Study
Roman Novak, Yasaman Bahri, Daniel A. Abolafia, Jeffrey Pennington, Jascha Narain Sohl-Dickstein
23 Feb 2018

On the Optimization of Deep Networks: Implicit Acceleration by Overparameterization
Sanjeev Arora, Nadav Cohen, Elad Hazan
19 Feb 2018
On Characterizing the Capacity of Neural Networks using Algebraic Topology
William H. Guss, Ruslan Salakhutdinov
13 Feb 2018

Intriguing Properties of Randomly Weighted Networks: Generalizing While Learning Next to Nothing
Amir Rosenfeld, John K. Tsotsos
02 Feb 2018

Deep Learning Works in Practice. But Does it Work in Theory?
L. Hoang, R. Guerraoui
31 Jan 2018

Bayesian Deep Convolutional Encoder-Decoder Networks for Surrogate Modeling and Uncertainty Quantification
Yinhao Zhu, N. Zabaras
21 Jan 2018

Deep Hidden Physics Models: Deep Learning of Nonlinear Partial Differential Equations
M. Raissi
20 Jan 2018

Which Neural Net Architectures Give Rise To Exploding and Vanishing Gradients?
Boris Hanin
11 Jan 2018

The exploding gradient problem demystified - definition, prevalence, impact, origin, tradeoffs, and solutions
George Philipp, Basel Alomair, J. Carbonell
15 Dec 2017

A trans-disciplinary review of deep learning research for water resources scientists
Chaopeng Shen
06 Dec 2017

Adversarial Feature Augmentation for Unsupervised Domain Adaptation
Riccardo Volpi, Pietro Morerio, Silvio Savarese, Vittorio Murino
23 Nov 2017

Bounding and Counting Linear Regions of Deep Neural Networks
Thiago Serra, Christian Tjandraatmadja, Srikumar Ramalingam
06 Nov 2017
An efficient quantum algorithm for generative machine learning
Xun Gao, Zhengyu Zhang, L. Duan
06 Nov 2017

Expressive power of recurrent neural networks
Valentin Khrulkov, Alexander Novikov, Ivan Oseledets
02 Nov 2017

Approximating Continuous Functions by ReLU Nets of Minimal Width
Boris Hanin, Mark Sellke
31 Oct 2017

Optimization Landscape and Expressivity of Deep CNNs
Quynh N. Nguyen, Matthias Hein
30 Oct 2017

A Correspondence Between Random Neural Networks and Statistical Field Theory
S. Schoenholz, Jeffrey Pennington, Jascha Narain Sohl-Dickstein
18 Oct 2017

Searching for Activation Functions
Prajit Ramachandran, Barret Zoph, Quoc V. Le
16 Oct 2017

Training Feedforward Neural Networks with Standard Logistic Activations is Feasible
Emanuele Sansone, F. D. De Natale
03 Oct 2017

When is a Convolutional Filter Easy To Learn?
S. Du, Jason D. Lee, Yuandong Tian
18 Sep 2017

Learning Deep Neural Network Representations for Koopman Operators of Nonlinear Dynamical Systems
Enoch Yeung, Soumya Kundu, Nathan Oken Hodas
22 Aug 2017

Deep Learning the Ising Model Near Criticality
A. Morningstar, R. Melko
15 Aug 2017