Superpolynomial Lower Bounds for Learning One-Layer Neural Networks using Gradient Descent
Surbhi Goel, Aravind Gollakota, Zhihan Jin, Sushrut Karmalkar, Adam R. Klivans
arXiv: 2006.12011 · 22 June 2020 · Tags: MLT, ODL
Papers citing "Superpolynomial Lower Bounds for Learning One-Layer Neural Networks using Gradient Descent" (23 papers shown)
Low-dimensional Functions are Efficiently Learnable under Randomly Biased Distributions
Elisabetta Cornacchia, Dan Mikulincer, Elchanan Mossel · 10 Feb 2025

Exploration is Harder than Prediction: Cryptographically Separating Reinforcement Learning from Supervised Learning
Noah Golowich, Ankur Moitra, Dhruv Rohatgi · 04 Apr 2024 · Tags: OffRL

On Single Index Models beyond Gaussian Data
Joan Bruna, Loucas Pillaud-Vivien, Aaron Zweig · 28 Jul 2023

Efficiently Learning One-Hidden-Layer ReLU Networks via Schur Polynomials
Ilias Diakonikolas, D. Kane · 24 Jul 2023

A faster and simpler algorithm for learning shallow networks
Sitan Chen, Shyam Narayanan · 24 Jul 2023

Smoothing the Landscape Boosts the Signal for SGD: Optimal Sample Complexity for Learning Single Index Models
Alexandru Damian, Eshaan Nichani, Rong Ge, Jason D. Lee · 18 May 2023 · Tags: MLT

Computational Complexity of Learning Neural Networks: Smoothness and Degeneracy
Amit Daniely, Nathan Srebro, Gal Vardi · 15 Feb 2023

Learning Single-Index Models with Shallow Neural Networks
A. Bietti, Joan Bruna, Clayton Sanford, M. Song · 27 Oct 2022

Global Convergence of SGD On Two Layer Neural Nets
Pulkit Gopalani, Anirbit Mukherjee · 20 Oct 2022

Neural Networks can Learn Representations with Gradient Descent
Alexandru Damian, Jason D. Lee, Mahdi Soltanolkotabi · 30 Jun 2022 · Tags: SSL, MLT

Learning ReLU networks to high uniform accuracy is intractable
Julius Berner, Philipp Grohs, F. Voigtlaender · 26 May 2022

Hardness of Noise-Free Learning for Two-Hidden-Layer Neural Networks
Sitan Chen, Aravind Gollakota, Adam R. Klivans, Raghu Meka · 10 Feb 2022

Lattice-Based Methods Surpass Sum-of-Squares in Clustering
Ilias Zadik, M. Song, Alexander S. Wein, Joan Bruna · 07 Dec 2021

Efficiently Learning Any One Hidden Layer ReLU Network From Queries
Sitan Chen, Adam R. Klivans, Raghu Meka · 08 Nov 2021 · Tags: MLAU, MLT

The Connection Between Approximation, Depth Separation and Learnability in Neural Networks
Eran Malach, Gilad Yehudai, Shai Shalev-Shwartz, Ohad Shamir · 31 Jan 2021

From Local Pseudorandom Generators to Hardness of Learning
Amit Daniely, Gal Vardi · 20 Jan 2021

Provable Generalization of SGD-trained Neural Networks of Any Width in the Presence of Adversarial Label Noise
Spencer Frei, Yuan Cao, Quanquan Gu · 04 Jan 2021 · Tags: FedML, MLT

Achieving Adversarial Robustness Requires An Active Teacher
Chao Ma, Lexing Ying · 14 Dec 2020

On InstaHide, Phase Retrieval, and Sparse Matrix Factorization
Sitan Chen, Xiaoxiao Li, Zhao Song, Danyang Zhuo · 23 Nov 2020

Learning Deep ReLU Networks Is Fixed-Parameter Tractable
Sitan Chen, Adam R. Klivans, Raghu Meka · 28 Sep 2020

Statistical Query Algorithms and Low-Degree Tests Are Almost Equivalent
Matthew Brennan, Guy Bresler, Samuel B. Hopkins, Jingkai Li, T. Schramm · 13 Sep 2020

Deep Networks and the Multiple Manifold Problem
Sam Buchanan, D. Gilboa, John N. Wright · 25 Aug 2020

Near-Optimal SQ Lower Bounds for Agnostically Learning Halfspaces and ReLUs under Gaussian Marginals
Ilias Diakonikolas, D. Kane, Nikos Zarifis · 29 Jun 2020