Superpolynomial Lower Bounds for Learning One-Layer Neural Networks using Gradient Descent

22 June 2020
Surbhi Goel, Aravind Gollakota, Zhihan Jin, Sushrut Karmalkar, Adam R. Klivans
Tags: MLT, ODL
arXiv:2006.12011

Papers citing "Superpolynomial Lower Bounds for Learning One-Layer Neural Networks using Gradient Descent"

23 papers shown
Low-dimensional Functions are Efficiently Learnable under Randomly Biased Distributions
Elisabetta Cornacchia, Dan Mikulincer, Elchanan Mossel
10 Feb 2025

Exploration is Harder than Prediction: Cryptographically Separating Reinforcement Learning from Supervised Learning
Noah Golowich, Ankur Moitra, Dhruv Rohatgi
Tags: OffRL
04 Apr 2024

On Single Index Models beyond Gaussian Data
Joan Bruna, Loucas Pillaud-Vivien, Aaron Zweig
28 Jul 2023

Efficiently Learning One-Hidden-Layer ReLU Networks via Schur Polynomials
Ilias Diakonikolas, D. Kane
24 Jul 2023

A faster and simpler algorithm for learning shallow networks
Sitan Chen, Shyam Narayanan
24 Jul 2023

Smoothing the Landscape Boosts the Signal for SGD: Optimal Sample Complexity for Learning Single Index Models
Alexandru Damian, Eshaan Nichani, Rong Ge, Jason D. Lee
Tags: MLT
18 May 2023

Computational Complexity of Learning Neural Networks: Smoothness and Degeneracy
Amit Daniely, Nathan Srebro, Gal Vardi
15 Feb 2023

Learning Single-Index Models with Shallow Neural Networks
A. Bietti, Joan Bruna, Clayton Sanford, M. Song
27 Oct 2022

Global Convergence of SGD On Two Layer Neural Nets
Pulkit Gopalani, Anirbit Mukherjee
20 Oct 2022

Neural Networks can Learn Representations with Gradient Descent
Alexandru Damian, Jason D. Lee, Mahdi Soltanolkotabi
Tags: SSL, MLT
30 Jun 2022

Learning ReLU networks to high uniform accuracy is intractable
Julius Berner, Philipp Grohs, F. Voigtlaender
26 May 2022

Hardness of Noise-Free Learning for Two-Hidden-Layer Neural Networks
Sitan Chen, Aravind Gollakota, Adam R. Klivans, Raghu Meka
10 Feb 2022

Lattice-Based Methods Surpass Sum-of-Squares in Clustering
Ilias Zadik, M. Song, Alexander S. Wein, Joan Bruna
07 Dec 2021

Efficiently Learning Any One Hidden Layer ReLU Network From Queries
Sitan Chen, Adam R. Klivans, Raghu Meka
Tags: MLAU, MLT
08 Nov 2021

The Connection Between Approximation, Depth Separation and Learnability in Neural Networks
Eran Malach, Gilad Yehudai, Shai Shalev-Shwartz, Ohad Shamir
31 Jan 2021

From Local Pseudorandom Generators to Hardness of Learning
Amit Daniely, Gal Vardi
20 Jan 2021

Provable Generalization of SGD-trained Neural Networks of Any Width in the Presence of Adversarial Label Noise
Spencer Frei, Yuan Cao, Quanquan Gu
Tags: FedML, MLT
04 Jan 2021

Achieving Adversarial Robustness Requires An Active Teacher
Chao Ma, Lexing Ying
14 Dec 2020

On InstaHide, Phase Retrieval, and Sparse Matrix Factorization
Sitan Chen, Xiaoxiao Li, Zhao Song, Danyang Zhuo
23 Nov 2020

Learning Deep ReLU Networks Is Fixed-Parameter Tractable
Sitan Chen, Adam R. Klivans, Raghu Meka
28 Sep 2020

Statistical Query Algorithms and Low-Degree Tests Are Almost Equivalent
Matthew Brennan, Guy Bresler, Samuel B. Hopkins, Jerry Li, T. Schramm
13 Sep 2020

Deep Networks and the Multiple Manifold Problem
Sam Buchanan, D. Gilboa, John N. Wright
25 Aug 2020

Near-Optimal SQ Lower Bounds for Agnostically Learning Halfspaces and ReLUs under Gaussian Marginals
Ilias Diakonikolas, D. Kane, Nikos Zarifis
29 Jun 2020