Learning Lipschitz Functions by GD-trained Shallow Overparameterized ReLU Neural Networks

28 December 2022
Ilja Kuzborskij, Csaba Szepesvári
arXiv:2212.13848

Papers citing "Learning Lipschitz Functions by GD-trained Shallow Overparameterized ReLU Neural Networks"

32 papers

Improved Convergence Guarantees for Shallow Neural Networks
A. Razborov · 05 Dec 2022 · ODL

Learning Single-Index Models with Shallow Neural Networks
A. Bietti, Joan Bruna, Clayton Sanford, M. Song · 27 Oct 2022

Neural Networks can Learn Representations with Gradient Descent
Alexandru Damian, Jason D. Lee, Mahdi Soltanolkotabi · 30 Jun 2022 · SSL, MLT

Deep Network Approximation in Terms of Intrinsic Parameters
Zuowei Shen, Haizhao Yang, Shijun Zhang · 15 Nov 2021

Subquadratic Overparameterization for Shallow Neural Networks
Chaehwan Song, Ali Ramezani-Kebrya, Thomas Pethick, Armin Eftekhari, Volkan Cevher · 02 Nov 2021

A spectral-based analysis of the separation between two-layer neural networks and linear methods
Lei Wu, Jihao Long · 10 Aug 2021

Deep learning: a statistical viewpoint
Peter L. Bartlett, Andrea Montanari, Alexander Rakhlin · 16 Mar 2021

Online nonparametric regression with Sobolev kernels
O. Zadorozhnyi, Pierre Gaillard, Sébastien Gerchinovitz, Alessandro Rudi · 06 Feb 2021

On the Proof of Global Convergence of Gradient Descent for Deep ReLU Networks with Linear Widths
Quynh N. Nguyen · 24 Jan 2021

Tight Bounds on the Smallest Eigenvalue of the Neural Tangent Kernel for Deep ReLU Networks
Quynh N. Nguyen, Marco Mondelli, Guido Montúfar · 21 Dec 2020

Regularization Matters: A Nonparametric Perspective on Overparametrized Neural Network
Tianyang Hu, Wei Cao, Cong Lin, Guang Cheng · 06 Jul 2020

Analyzing the discrepancy principle for kernelized spectral filter learning algorithms
Alain Celisse, Martin Wahl · 17 Apr 2020

Over-parametrized deep neural networks do not generalize well
Michael Kohler, A. Krzyżak · 09 Dec 2019

Polylogarithmic width suffices for gradient descent to achieve arbitrarily small test error with shallow ReLU networks
Ziwei Ji, Matus Telgarsky · 26 Sep 2019

On the Inductive Bias of Neural Tangent Kernels
A. Bietti, Julien Mairal · 29 May 2019

Towards moderate overparameterization: global convergence guarantees for training shallow neural networks
Samet Oymak, Mahdi Soltanolkotabi · 12 Feb 2019

Fine-Grained Analysis of Optimization and Generalization for Overparameterized Two-Layer Neural Networks
Sanjeev Arora, S. Du, Wei Hu, Zhiyuan Li, Ruosong Wang · 24 Jan 2019 · MLT

Consistency of Interpolation with Laplace Kernels is a High-Dimensional Phenomenon
Alexander Rakhlin, Xiyu Zhai · 28 Dec 2018

Learning and Generalization in Overparameterized Neural Networks, Going Beyond Two Layers
Zeyuan Allen-Zhu, Yuanzhi Li, Yingyu Liang · 12 Nov 2018 · MLT

A Convergence Theory for Deep Learning via Over-Parameterization
Zeyuan Allen-Zhu, Yuanzhi Li, Zhao Song · 09 Nov 2018 · AI4CE, ODL

Gradient Descent Provably Optimizes Over-parameterized Neural Networks
S. Du, Xiyu Zhai, Barnabás Póczós, Aarti Singh · 04 Oct 2018 · MLT, ODL

Does data interpolation contradict statistical optimality?
M. Belkin, Alexander Rakhlin, Alexandre B. Tsybakov · 25 Jun 2018

Neural Tangent Kernel: Convergence and Generalization in Neural Networks
Arthur Jacot, Franck Gabriel, Clément Hongler · 20 Jun 2018

Statistical Optimality of Stochastic Gradient Descent on Hard Learning Problems through Multiple Passes
Loucas Pillaud-Vivien, Alessandro Rudi, Francis R. Bach · 25 May 2018

Size-Independent Sample Complexity of Neural Networks
Noah Golowich, Alexander Rakhlin, Ohad Shamir · 18 Dec 2017

Spectrally-normalized margin bounds for neural networks
Peter L. Bartlett, Dylan J. Foster, Matus Telgarsky · 26 Jun 2017 · ODL

Toward Deeper Understanding of Neural Networks: The Power of Initialization and a Dual View on Expressivity
Amit Daniely, Roy Frostig, Y. Singer · 18 Feb 2016

Generalization Properties of Learning with Random Features
Alessandro Rudi, Lorenzo Rosasco · 14 Feb 2016 · MLT

Breaking the Curse of Dimensionality with Convex Neural Networks
Francis R. Bach · 30 Dec 2014

Non-parametric Stochastic Approximation with Large Step sizes
Aymeric Dieuleveut, Francis R. Bach · 02 Aug 2014

Early stopping and non-parametric regression: An optimal data-dependent stopping rule
Garvesh Raskutti, Martin J. Wainwright, Bin Yu · 15 Jun 2013

Optimistic Rates for Learning with a Smooth Loss
Nathan Srebro, Karthik Sridharan, Ambuj Tewari · 20 Sep 2010