Tight Hardness Results for Training Depth-2 ReLU Networks
arXiv:2011.13550
27 November 2020
Surbhi Goel, Adam R. Klivans, Pasin Manurangsi, Daniel Reichman
Papers citing "Tight Hardness Results for Training Depth-2 ReLU Networks" (7 papers):
Complexity of Neural Network Training and ETR: Extensions with Effectively Continuous Functions
Teemu Hankala, Miika Hannula, J. Kontinen, Jonni Virtema (19 May 2023)
When Deep Learning Meets Polyhedral Theory: A Survey
Joey Huchette, Gonzalo Muñoz, Thiago Serra, Calvin Tsay (29 Apr 2023)
Lower Bounds on the Depth of Integral ReLU Neural Networks via Lattice Polytopes
Christian Haase, Christoph Hertrich, Georg Loho (24 Feb 2023)
Training Fully Connected Neural Networks is $\exists\mathbb{R}$-Complete
Daniel Bertschinger, Christoph Hertrich, Paul Jungeblut, Tillmann Miltzow, Simon Weber (04 Apr 2022)
Neural networks with linear threshold activations: structure and algorithms
Sammy Khalife, Hongyu Cheng, A. Basu (15 Nov 2021)
Path Regularization: A Convexity and Sparsity Inducing Regularization for Parallel ReLU Networks
Tolga Ergen, Mert Pilanci (18 Oct 2021)
From Local Pseudorandom Generators to Hardness of Learning
Amit Daniely, Gal Vardi (20 Jan 2021)