Online Stochastic Gradient Descent with Arbitrary Initialization Solves Non-smooth, Non-convex Phase Retrieval

28 October 2019
Yan Shuo Tan, Roman Vershynin
arXiv: 1910.12837

Papers citing "Online Stochastic Gradient Descent with Arbitrary Initialization Solves Non-smooth, Non-convex Phase Retrieval"

10 of 10 citing papers shown:
  1. Grokking as a First Order Phase Transition in Two Layer Networks. Noa Rubin, Inbar Seroussi, Z. Ringel. 05 Oct 2023.
  2. Smoothing the Landscape Boosts the Signal for SGD: Optimal Sample Complexity for Learning Single Index Models. Alexandru Damian, Eshaan Nichani, Rong Ge, Jason D. Lee. 18 May 2023. (MLT)
  3. High-dimensional scaling limits and fluctuations of online least-squares SGD with smooth covariance. Krishnakumar Balasubramanian, Promit Ghosal, Ye He. 03 Apr 2023.
  4. Statistical Inference for Linear Functionals of Online SGD in High-dimensional Linear Regression. Bhavya Agrawalla, Krishnakumar Balasubramanian, Promit Ghosal. 20 Feb 2023.
  5. From high-dimensional & mean-field dynamics to dimensionless ODEs: A unifying approach to SGD in two-layers networks. Luca Arnaboldi, Ludovic Stephan, Florent Krzakala, Bruno Loureiro. 12 Feb 2023. (MLT)
  6. From Gradient Flow on Population Loss to Learning with Stochastic Gradient Descent. Satyen Kale, Jason D. Lee, Chris De Sa, Ayush Sekhari, Karthik Sridharan. 13 Oct 2022.
  7. Phase diagram of Stochastic Gradient Descent in high-dimensional two-layer neural networks. R. Veiga, Ludovic Stephan, Bruno Loureiro, Florent Krzakala, Lenka Zdeborová. 01 Feb 2022. (MLT)
  8. On Uniform Boundedness Properties of SGD and its Momentum Variants. Xiaoyu Wang, M. Johansson. 25 Jan 2022.
  9. SGD for Structured Nonconvex Functions: Learning Rates, Minibatching and Interpolation. Robert Mansel Gower, Othmane Sebbouh, Nicolas Loizou. 18 Jun 2020.
  10. An Analysis of Constant Step Size SGD in the Non-convex Regime: Asymptotic Normality and Bias. Lu Yu, Krishnakumar Balasubramanian, S. Volgushev, Murat A. Erdogdu. 14 Jun 2020.