Learning Two Layer Rectified Neural Networks in Polynomial Time

Ainesh Bakshi, Rajesh Jayaram, David P. Woodruff
5 November 2018. arXiv:1811.01885. [NoLa]
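For context on the architecture named in the title: a two-layer rectified network maps an input x to U · ReLU(V x), i.e. a linear layer, a coordinate-wise rectifier, and a second linear layer. The short NumPy sketch below is only meant to illustrate this function class; the variable names, dimensions, and formulation A = U · ReLU(V x) are a standard convention assumed here, not notation taken from the paper itself.

import numpy as np

def two_layer_relu(x, V, U):
    """Two-layer rectified network: x -> U @ ReLU(V @ x).

    V has shape (k, d): first-layer weights.
    U has shape (m, k): second-layer weights.
    The learning problem in this line of work is, roughly, to recover
    the weights (up to natural symmetries) from input/output samples.
    """
    return U @ np.maximum(V @ x, 0.0)  # ReLU applied coordinate-wise

# Illustrative usage with random weights and a random input.
rng = np.random.default_rng(0)
d, k, m = 8, 4, 3                      # input, hidden, output widths (arbitrary)
V = rng.standard_normal((k, d))
U = rng.standard_normal((m, k))
x = rng.standard_normal(d)
print(two_layer_relu(x, V, U).shape)   # -> (3,)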

Papers citing "Learning Two Layer Rectified Neural Networks in Polynomial Time"

19 papers shown
 1. Efficiently Learning One-Hidden-Layer ReLU Networks via Schur Polynomials. Ilias Diakonikolas, D. Kane. 24 Jul 2023.
 2. A faster and simpler algorithm for learning shallow networks. Sitan Chen, Shyam Narayanan. 24 Jul 2023.
 3. Computational Complexity of Learning Neural Networks: Smoothness and Degeneracy. Amit Daniely, Nathan Srebro, Gal Vardi. 15 Feb 2023.
 4. Bounding the Width of Neural Networks via Coupled Initialization -- A Worst Case Analysis. Alexander Munteanu, Simon Omlor, Zhao Song, David P. Woodruff. 26 Jun 2022.
 5. Training Fully Connected Neural Networks is ∃ℝ-Complete. Daniel Bertschinger, Christoph Hertrich, Paul Jungeblut, Tillmann Miltzow, Simon Weber. 04 Apr 2022. [OffRL]
 6. Hardness of Noise-Free Learning for Two-Hidden-Layer Neural Networks. Sitan Chen, Aravind Gollakota, Adam R. Klivans, Raghu Meka. 10 Feb 2022.
 7. How does unlabeled data improve generalization in self-training? A one-hidden-layer theoretical analysis. Shuai Zhang, Ming Wang, Sijia Liu, Pin-Yu Chen, Jinjun Xiong. 21 Jan 2022. [SSL, MLT]
 8. Efficiently Learning Any One Hidden Layer ReLU Network From Queries. Sitan Chen, Adam R. Klivans, Raghu Meka. 08 Nov 2021. [MLAU, MLT]
 9. From Local Pseudorandom Generators to Hardness of Learning. Amit Daniely, Gal Vardi. 20 Jan 2021.
10. Towards Understanding Ensemble, Knowledge Distillation and Self-Distillation in Deep Learning. Zeyuan Allen-Zhu, Yuanzhi Li. 17 Dec 2020. [FedML]
11. Small Covers for Near-Zero Sets of Polynomials and Learning Latent Variable Models. Ilias Diakonikolas, D. Kane. 14 Dec 2020.
12. Quantum-Inspired Algorithms from Randomized Numerical Linear Algebra. Nadiia Chepurko, K. Clarkson, L. Horesh, Honghao Lin, David P. Woodruff. 09 Nov 2020.
13. Learning Deep ReLU Networks Is Fixed-Parameter Tractable. Sitan Chen, Adam R. Klivans, Raghu Meka. 28 Sep 2020.
14. Generalized Leverage Score Sampling for Neural Networks. J. Lee, Ruoqi Shen, Zhao Song, Mengdi Wang, Zheng Yu. 21 Sep 2020.
15. Training (Overparametrized) Neural Networks in Near-Linear Time. Jan van den Brand, Binghui Peng, Zhao Song, Omri Weinstein. 20 Jun 2020. [ODL]
16. Feature Purification: How Adversarial Training Performs Robust Deep Learning. Zeyuan Allen-Zhu, Yuanzhi Li. 20 May 2020. [MLT, AAML]
17. What Can ResNet Learn Efficiently, Going Beyond Kernels? Zeyuan Allen-Zhu, Yuanzhi Li. 24 May 2019.
18. Analysis of a Two-Layer Neural Network via Displacement Convexity. Adel Javanmard, Marco Mondelli, Andrea Montanari. 05 Jan 2019. [MLT]
19. Learning and Generalization in Overparameterized Neural Networks, Going Beyond Two Layers. Zeyuan Allen-Zhu, Yuanzhi Li, Yingyu Liang. 12 Nov 2018. [MLT]