Subquadratic Overparameterization for Shallow Neural Networks

2 November 2021
Chaehwan Song · Ali Ramezani-Kebrya · Thomas Pethick · Armin Eftekhari · V. Cevher
ArXiv · PDF · HTML

Papers citing "Subquadratic Overparameterization for Shallow Neural Networks"

25 / 25 papers shown
Feature Learning Beyond the Edge of Stability
Dávid Terjék (MLT)
46 · 0 · 0
18 Feb 2025

Loss Landscape Characterization of Neural Networks without Over-Parametrization
Rustem Islamov · Niccolò Ajroldi · Antonio Orvieto · Aurelien Lucchi
35 · 4 · 0
16 Oct 2024

Optimal Hessian/Jacobian-Free Nonconvex-PL Bilevel Optimization
Feihu Huang
50 · 4 · 0
25 Jul 2024

Approximation and Gradient Descent Training with Neural Networks
G. Welper
38 · 1 · 0
19 May 2024

Adaptive Mirror Descent Bilevel Optimization
Feihu Huang
37 · 1 · 0
08 Nov 2023

On the Convergence of Encoder-only Shallow Transformers
Yongtao Wu · Fanghui Liu · Grigorios G. Chrysos · V. Cevher
42 · 5 · 0
02 Nov 2023

Global Convergence of SGD For Logistic Loss on Two Layer Neural Nets
Pulkit Gopalani · Samyak Jha · Anirbit Mukherjee
19 · 2 · 0
17 Sep 2023

Approximation Results for Gradient Descent trained Neural Networks
G. Welper
48 · 0 · 0
09 Sep 2023

On Penalty Methods for Nonconvex Bilevel Optimization and First-Order Stochastic Approximation
Jeongyeol Kwon · Dohyun Kwon · Steve Wright · Robert D. Nowak
31 · 25 · 0
04 Sep 2023

Mildly Overparameterized ReLU Networks Have a Favorable Loss Landscape
Kedar Karhadkar · Michael Murray · Hanna Tseran · Guido Montúfar
16 · 6 · 0
31 May 2023

On Momentum-Based Gradient Methods for Bilevel Optimization with Nonconvex Lower-Level
Feihu Huang
27 · 18 · 0
07 Mar 2023

Learning Lipschitz Functions by GD-trained Shallow Overparameterized ReLU Neural Networks
Ilja Kuzborskij · Csaba Szepesvári
21 · 4 · 0
28 Dec 2022

Improved Convergence Guarantees for Shallow Neural Networks
A. Razborov (ODL)
27 · 1 · 0
05 Dec 2022

Finite Sample Identification of Wide Shallow Neural Networks with Biases
M. Fornasier · T. Klock · Marco Mondelli · Michael Rauchensteiner
19 · 6 · 0
08 Nov 2022

Optimization for Amortized Inverse Problems
Tianci Liu · Tong Yang · Quan Zhang · Qi Lei
36 · 5 · 0
25 Oct 2022

Global Convergence of SGD On Two Layer Neural Nets
Pulkit Gopalani · Anirbit Mukherjee
26 · 5 · 0
20 Oct 2022

BOME! Bilevel Optimization Made Easy: A Simple First-Order Approach
Mao Ye · B. Liu · S. Wright · Peter Stone · Qian Liu
72 · 82 · 0
19 Sep 2022

Approximation results for Gradient Descent trained Shallow Neural Networks in $1d$
R. Gentile · G. Welper (ODL)
52 · 6 · 0
17 Sep 2022

Robustness in deep learning: The good (width), the bad (depth), and the ugly (initialization)
Zhenyu Zhu · Fanghui Liu · Grigorios G. Chrysos · V. Cevher
39 · 19 · 0
15 Sep 2022

Informed Learning by Wide Neural Networks: Convergence, Generalization and Sampling Complexity
Jianyi Yang · Shaolei Ren
26 · 3 · 0
02 Jul 2022

A Framework for Overparameterized Learning
Dávid Terjék · Diego González-Sánchez (MLT)
16 · 1 · 0
26 May 2022

Memorization and Optimization in Deep Neural Networks with Minimum Over-parameterization
Simone Bombari · Mohammad Hossein Amani · Marco Mondelli
25 · 26 · 0
20 May 2022

Geometric Regularization from Overparameterization
Nicholas J. Teague
22 · 1 · 0
18 Feb 2022

On the Convergence of Shallow Neural Network Training with Randomly Masked Neurons
Fangshuo Liao · Anastasios Kyrillidis
38 · 16 · 0
05 Dec 2021

How much pre-training is enough to discover a good subnetwork?
Cameron R. Wolfe · Fangshuo Liao · Qihan Wang · J. Kim · Anastasios Kyrillidis
30 · 3 · 0
31 Jul 2021