Learning Two Layer Rectified Neural Networks in Polynomial Time
Ainesh Bakshi, Rajesh Jayaram, David P. Woodruff
arXiv:1811.01885, 5 November 2018
Papers citing "Learning Two Layer Rectified Neural Networks in Polynomial Time" (19 papers shown)
Efficiently Learning One-Hidden-Layer ReLU Networks via Schur Polynomials. Ilias Diakonikolas, D. Kane. 24 Jul 2023.
A faster and simpler algorithm for learning shallow networks. Sitan Chen, Shyam Narayanan. 24 Jul 2023.
Computational Complexity of Learning Neural Networks: Smoothness and Degeneracy. Amit Daniely, Nathan Srebro, Gal Vardi. 15 Feb 2023.
Bounding the Width of Neural Networks via Coupled Initialization -- A Worst Case Analysis. Alexander Munteanu, Simon Omlor, Zhao Song, David P. Woodruff. 26 Jun 2022.
Training Fully Connected Neural Networks is ∃ℝ-Complete. Daniel Bertschinger, Christoph Hertrich, Paul Jungeblut, Tillmann Miltzow, Simon Weber. 04 Apr 2022.
Hardness of Noise-Free Learning for Two-Hidden-Layer Neural Networks. Sitan Chen, Aravind Gollakota, Adam R. Klivans, Raghu Meka. 10 Feb 2022.
How does unlabeled data improve generalization in self-training? A one-hidden-layer theoretical analysis. Shuai Zhang, Ming Wang, Sijia Liu, Pin-Yu Chen, Jinjun Xiong. 21 Jan 2022.
Efficiently Learning Any One Hidden Layer ReLU Network From Queries. Sitan Chen, Adam R. Klivans, Raghu Meka. 08 Nov 2021.
From Local Pseudorandom Generators to Hardness of Learning. Amit Daniely, Gal Vardi. 20 Jan 2021.
Towards Understanding Ensemble, Knowledge Distillation and Self-Distillation in Deep Learning. Zeyuan Allen-Zhu, Yuanzhi Li. 17 Dec 2020.
Small Covers for Near-Zero Sets of Polynomials and Learning Latent Variable Models. Ilias Diakonikolas, D. Kane. 14 Dec 2020.
Quantum-Inspired Algorithms from Randomized Numerical Linear Algebra. Nadiia Chepurko, K. Clarkson, L. Horesh, Honghao Lin, David P. Woodruff. 09 Nov 2020.
Learning Deep ReLU Networks Is Fixed-Parameter Tractable. Sitan Chen, Adam R. Klivans, Raghu Meka. 28 Sep 2020.
Generalized Leverage Score Sampling for Neural Networks. J. Lee, Ruoqi Shen, Zhao Song, Mengdi Wang, Zheng Yu. 21 Sep 2020.
Training (Overparametrized) Neural Networks in Near-Linear Time. Jan van den Brand, Binghui Peng, Zhao Song, Omri Weinstein. 20 Jun 2020.
Feature Purification: How Adversarial Training Performs Robust Deep Learning. Zeyuan Allen-Zhu, Yuanzhi Li. 20 May 2020.
What Can ResNet Learn Efficiently, Going Beyond Kernels? Zeyuan Allen-Zhu, Yuanzhi Li. 24 May 2019.
Analysis of a Two-Layer Neural Network via Displacement Convexity. Adel Javanmard, Marco Mondelli, Andrea Montanari. 05 Jan 2019.
Learning and Generalization in Overparameterized Neural Networks, Going Beyond Two Layers. Zeyuan Allen-Zhu, Yuanzhi Li, Yingyu Liang. 12 Nov 2018.