arXiv:2204.01368
Cited By
Training Fully Connected Neural Networks is $\exists\mathbb{R}$-Complete
Daniel Bertschinger, Christoph Hertrich, Paul Jungeblut, Tillmann Miltzow, Simon Weber (4 April 2022)
ArXiv · PDF · HTML
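
A brief note on the class named in the title (background, not part of the citation data): $\exists\mathbb{R}$ denotes the class of decision problems polynomial-time reducible to the existential theory of the reals, i.e., to deciding the truth of sentences of the form

$$\exists x_1, \dots, x_n \in \mathbb{R} : \Phi(x_1, \dots, x_n),$$

where $\Phi$ is a quantifier-free formula over polynomial equations and inequalities with integer coefficients. It is known that $\mathrm{NP} \subseteq \exists\mathbb{R} \subseteq \mathrm{PSPACE}$.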

Papers citing "Training Fully Connected Neural Networks is $\exists\mathbb{R}$-Complete" (48 papers shown)

When Deep Learning Meets Polyhedral Theory: A Survey
Joey Huchette, Gonzalo Muñoz, Thiago Serra, Calvin Tsay (29 Apr 2023)

Training Neural Networks is NP-Hard in Fixed Dimension
Vincent Froese, Christoph Hertrich (29 Mar 2023)

Lower Bounds on the Depth of Integral ReLU Neural Networks via Lattice Polytopes
Christian Haase, Christoph Hertrich, Georg Loho (24 Feb 2023)

Hardness of Noise-Free Learning for Two-Hidden-Layer Neural Networks
Sitan Chen, Aravind Gollakota, Adam R. Klivans, Raghu Meka (10 Feb 2022)

Neural networks with linear threshold activations: structure and algorithms
Sammy Khalife, Hongyu Cheng, A. Basu (15 Nov 2021)

On the Optimal Memorization Power of ReLU Neural Networks
Gal Vardi, Gilad Yehudai, Ohad Shamir (07 Oct 2021)

On minimal representations of shallow ReLU networks
Steffen Dereich, Sebastian Kassing (12 Aug 2021)

On Classifying Continuous Constraint Satisfaction Problems
Tillmann Miltzow, R. F. Schmiermann (04 Jun 2021)

Covering Polygons is Even Harder
Mikkel Abrahamsen (04 Jun 2021)

Towards Lower Bounds on the Depth of ReLU Neural Networks
Christoph Hertrich, A. Basu, M. D. Summa, M. Skutella (31 May 2021)

The Computational Complexity of ReLU Network Training Parameterized by Data Dimensionality
Vincent Froese, Christoph Hertrich, R. Niedermeier (18 May 2021)

The Modern Mathematics of Deep Learning
Julius Berner, Philipp Grohs, Gitta Kutyniok, P. Petersen (09 May 2021)

Sharp bounds for the number of regions of maxout networks and vertices of Minkowski sums
Guido Montúfar, Yue Ren, Leon Zhang (16 Apr 2021)

ReLU Neural Networks of Polynomial Size for Exact Maximum Flow Computation
Christoph Hertrich, Leon Sering (12 Feb 2021)

Tight Hardness Results for Training Depth-2 ReLU Networks
Surbhi Goel, Adam R. Klivans, Pasin Manurangsi, Daniel Reichman (27 Nov 2020)

Learning Deep ReLU Networks Is Fixed-Parameter Tractable
Sitan Chen, Adam R. Klivans, Raghu Meka (28 Sep 2020)

Provably Good Solutions to the Knapsack Problem via Neural Networks of Bounded Size
Christoph Hertrich, M. Skutella (28 May 2020)

Approximation Schemes for ReLU Regression
Ilias Diakonikolas, Surbhi Goel, Sushrut Karmalkar, Adam R. Klivans, Mahdi Soltanolkotabi (26 May 2020)

Complexity of Linear Regions in Deep Networks
Boris Hanin, David Rolnick (25 Jan 2019)

A Convergence Theory for Deep Learning via Over-Parameterization
Zeyuan Allen-Zhu, Yuanzhi Li, Zhao Song (09 Nov 2018)

Gradient Descent Finds Global Minima of Deep Neural Networks
S. Du, Jason D. Lee, Haochuan Li, Liwei Wang, Masayoshi Tomizuka (09 Nov 2018)

Learning Two Layer Rectified Neural Networks in Polynomial Time
Ainesh Bakshi, Rajesh Jayaram, David P. Woodruff (05 Nov 2018)

Small ReLU networks are powerful memorizers: a tight analysis of memorization capacity
Chulhee Yun, S. Sra, Ali Jadbabaie (17 Oct 2018)

Principled Deep Neural Network Training through Linear Programming
D. Bienstock, Gonzalo Muñoz, Sebastian Pokutta (07 Oct 2018)

Complexity of Training ReLU Neural Network
Digvijay Boob, Santanu S. Dey, Guanghui Lan (27 Sep 2018)

Neural Tangent Kernel: Convergence and Generalization in Neural Networks
Arthur Jacot, Franck Gabriel, Clément Hongler (20 Jun 2018)

Tropical Geometry of Deep Neural Networks
Liwen Zhang, Gregory Naitzat, Lek-Heng Lim (18 May 2018)

Regularisation of Neural Networks by Enforcing Lipschitz Continuity
Henry Gouk, E. Frank, Bernhard Pfahringer, M. Cree (12 Apr 2018)

Neural Networks Should Be Wide Enough to Learn Disconnected Decision Regions
Quynh N. Nguyen, Mahesh Chandra Mukkamala, Matthias Hein (28 Feb 2018)

Learning One Convolutional Layer with Overlapping Patches
Surbhi Goel, Adam R. Klivans, Raghu Meka (07 Feb 2018)

Lower bounds over Boolean inputs for deep neural networks with ReLU gates
Anirbit Mukherjee, A. Basu (08 Nov 2017)

Bounding and Counting Linear Regions of Deep Neural Networks
Thiago Serra, Christian Tjandraatmadja, Srikumar Ramalingam (06 Nov 2017)

Approximating Continuous Functions by ReLU Nets of Minimal Width
Boris Hanin, Mark Sellke (31 Oct 2017)

Universal Function Approximation by Deep Neural Nets with Bounded Width and ReLU Activations
Boris Hanin (09 Aug 2017)

Gradient Descent Can Take Exponential Time to Escape Saddle Points
S. Du, Chi Jin, Jason D. Lee, Michael I. Jordan, Barnabás Póczós, Aarti Singh (29 May 2017)

Globally Optimal Gradient Descent for a ConvNet with Gaussian Inputs
Alon Brutzkus, Amir Globerson (26 Feb 2017)

Reliably Learning the ReLU in Polynomial Time
Surbhi Goel, Varun Kanade, Adam R. Klivans, J. Thaler (30 Nov 2016)

Understanding Deep Neural Networks with Rectified Linear Units
R. Arora, A. Basu, Poorya Mianjy, Anirbit Mukherjee (04 Nov 2016)

Depth-Width Tradeoffs in Approximating Natural Functions with Neural Networks
Itay Safran, Ohad Shamir (31 Oct 2016)

Why Deep Neural Networks for Function Approximation?
Shiyu Liang, R. Srikant (13 Oct 2016)

Error bounds for approximations with deep ReLU networks
Dmitry Yarotsky (03 Oct 2016)

On the Expressive Power of Deep Neural Networks
M. Raghu, Ben Poole, Jon M. Kleinberg, Surya Ganguli, Jascha Narain Sohl-Dickstein (16 Jun 2016)

On Restricted Nonnegative Matrix Factorization
D. Chistikov, S. Kiefer, Ines Marusic, M. Shirmohammadi, J. Worrell (23 May 2016)

Benefits of depth in neural networks
Matus Telgarsky (14 Feb 2016)

The Power of Depth for Feedforward Neural Networks
Ronen Eldan, Ohad Shamir (12 Dec 2015)

Representation Benefits of Deep Feedforward Networks
Matus Telgarsky (27 Sep 2015)

On the Number of Linear Regions of Deep Neural Networks
Guido Montúfar, Razvan Pascanu, Kyunghyun Cho, Yoshua Bengio (08 Feb 2014)

On the number of response regions of deep feed forward networks with piece-wise linear activations
Razvan Pascanu, Guido Montúfar, Yoshua Bengio (20 Dec 2013)