Finite Sample Identification of Wide Shallow Neural Networks with Biases

8 November 2022
M. Fornasier, T. Klock, Marco Mondelli, Michael Rauchensteiner
arXiv:2211.04589

Papers citing "Finite Sample Identification of Wide Shallow Neural Networks with Biases"

18 papers
Memorization and Optimization in Deep Neural Networks with Minimum Over-parameterization
  Simone Bombari, Mohammad Hossein Amani, Marco Mondelli (20 May 2022)
Landscape analysis of an improved power method for tensor decomposition
  Joe Kileel, T. Klock, João M. Pereira (29 Oct 2021)
A Local Convergence Theory for Mildly Over-Parameterized Two-Layer Neural Network
  Mo Zhou, Rong Ge, Chi Jin (04 Feb 2021)
On the Proof of Global Convergence of Gradient Descent for Deep ReLU Networks with Linear Widths
  Quynh N. Nguyen (24 Jan 2021)
Network size and weights size for memorization with two-layers neural networks
  Sébastien Bubeck, Ronen Eldan, Y. Lee, Dan Mikulincer (04 Jun 2020)
Learning deep linear neural networks: Riemannian gradient flows and convergence to global minimizers [MLT]
  B. Bah, Holger Rauhut, Ulrich Terstiege, Michael Westdickenberg (12 Oct 2019)
Kernel and Rich Regimes in Overparametrized Models
  Blake E. Woodworth, Suriya Gunasekar, Pedro H. P. Savarese, E. Moroshko, Itay Golan, Jason D. Lee, Daniel Soudry, Nathan Srebro (13 Jun 2019)
Quadratic Suffices for Over-parametrization via Matrix Chernoff Bound
  Zhao Song, Xin Yang (09 Jun 2019)
Implicit Regularization in Deep Matrix Factorization [AI4CE]
  Sanjeev Arora, Nadav Cohen, Wei Hu, Yuping Luo (31 May 2019)
Global Convergence of Adaptive Gradient Methods for An Over-parameterized Neural Network
  Xiaoxia Wu, S. Du, Rachel A. Ward (19 Feb 2019)
Stochastic Gradient Descent Optimizes Over-parameterized Deep ReLU Networks [ODL]
  Difan Zou, Yuan Cao, Dongruo Zhou, Quanquan Gu (21 Nov 2018)
Gradient Descent Provably Optimizes Over-parameterized Neural Networks [MLT, ODL]
  S. Du, Xiyu Zhai, Barnabás Póczós, Aarti Singh (04 Oct 2018)
Theoretical insights into the optimization landscape of over-parameterized shallow neural networks
  Mahdi Soltanolkotabi, Adel Javanmard, Jason D. Lee (16 Jul 2017)
Recovery Guarantees for One-hidden-layer Neural Networks [MLT]
  Kai Zhong, Zhao Song, Prateek Jain, Peter L. Bartlett, Inderjit S. Dhillon (10 Jun 2017)
Understanding deep learning requires rethinking generalization [HAI]
  Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals (10 Nov 2016)
In Search of the Real Inductive Bias: On the Role of Implicit Regularization in Deep Learning [AI4CE]
  Behnam Neyshabur, Ryota Tomioka, Nathan Srebro (20 Dec 2014)
Provable Methods for Training Neural Networks with Sparse Connectivity
  Hanie Sedghi, Anima Anandkumar (08 Dec 2014)
Guaranteed Non-Orthogonal Tensor Decomposition via Alternating Rank-1 Updates
  Anima Anandkumar, Rong Ge, Majid Janzamin (21 Feb 2014)