Online stochastic gradient descent on non-convex losses from high-dimensional inference
Gerard Ben Arous, Reza Gheissari, Aukosh Jagannath
23 March 2020 (arXiv:2003.10409)

Papers citing "Online stochastic gradient descent on non-convex losses from high-dimensional inference"

27 / 27 papers shown
Online Learning of Neural Networks
Amit Daniely, Idan Mehalel, Elchanan Mossel
MLT
14 May 2025

Statistically guided deep learning
Michael Kohler, A. Krzyżak
ODL, BDL
11 Apr 2025

Learning a Single Index Model from Anisotropic Data with vanilla Stochastic Gradient Descent
Guillaume Braun, Minh Ha Quang, Masaaki Imaizumi
MLT
31 Mar 2025

A distributional simplicity bias in the learning dynamics of transformers
Riccardo Rende, Federica Gerace, Alessandro Laio, Sebastian Goldt
17 Feb 2025

Low-dimensional Functions are Efficiently Learnable under Randomly Biased Distributions
Elisabetta Cornacchia, Dan Mikulincer, Elchanan Mossel
10 Feb 2025

Learning Gaussian Multi-Index Models with Gradient Flow: Time Complexity and Directional Convergence
Berfin Simsek, Amire Bendjeddou, Daniel Hsu
13 Nov 2024

Robust Feature Learning for Multi-Index Models in High Dimensions
Alireza Mousavi-Hosseini, Adel Javanmard, Murat A. Erdogdu
OOD, AAML
21 Oct 2024

Learning Multi-Index Models with Neural Networks via Mean-Field Langevin Dynamics
Alireza Mousavi-Hosseini, Denny Wu, Murat A. Erdogdu
MLT, AI4CE
14 Aug 2024

Repetita Iuvant: Data Repetition Allows SGD to Learn High-Dimensional Multi-Index Functions
Luca Arnaboldi, Yatin Dandi, Florent Krzakala, Luca Pesce, Ludovic Stephan
24 May 2024

Grokking as a First Order Phase Transition in Two Layer Networks
Noa Rubin, Inbar Seroussi, Zohar Ringel
05 Oct 2023

Symmetric Single Index Learning
Aaron Zweig, Joan Bruna
MLT
03 Oct 2023

Gradient-Based Feature Learning under Structured Data
Alireza Mousavi-Hosseini, Denny Wu, Taiji Suzuki, Murat A. Erdogdu
MLT
07 Sep 2023

Pareto Frontiers in Neural Feature Learning: Data, Compute, Width, and Luck
Benjamin L. Edelman, Surbhi Goel, Sham Kakade, Eran Malach, Cyril Zhang
07 Sep 2023

On Single Index Models beyond Gaussian Data
Joan Bruna, Loucas Pillaud-Vivien, Aaron Zweig
28 Jul 2023

Smoothing the Landscape Boosts the Signal for SGD: Optimal Sample Complexity for Learning Single Index Models
Alexandru Damian, Eshaan Nichani, Rong Ge, Jason D. Lee
MLT
18 May 2023

Provable Guarantees for Nonlinear Feature Learning in Three-Layer Neural Networks
Eshaan Nichani, Alexandru Damian, Jason D. Lee
MLT
11 May 2023

High-dimensional limit of one-pass SGD on least squares
Elizabeth Collins-Woodfin, Elliot Paquette
13 Apr 2023

High-dimensional scaling limits and fluctuations of online least-squares SGD with smooth covariance
Krishnakumar Balasubramanian, Promit Ghosal, Ye He
03 Apr 2023

Learning time-scales in two-layers neural networks
Raphael Berthier, Andrea Montanari, Kangjie Zhou
28 Feb 2023

Statistical Inference for Linear Functionals of Online SGD in High-dimensional Linear Regression
Bhavya Agrawalla, Krishnakumar Balasubramanian, Promit Ghosal
20 Feb 2023

From high-dimensional & mean-field dynamics to dimensionless ODEs: A unifying approach to SGD in two-layers networks
Luca Arnaboldi, Ludovic Stephan, Florent Krzakala, Bruno Loureiro
MLT
12 Feb 2023

Learning Single-Index Models with Shallow Neural Networks
A. Bietti, Joan Bruna, Clayton Sanford, M. Song
27 Oct 2022

Rigorous dynamical mean field theory for stochastic gradient descent methods
Cédric Gerbelot, Emanuele Troiani, Francesca Mignacco, Florent Krzakala, Lenka Zdeborova
12 Oct 2022

Neural Networks Efficiently Learn Low-Dimensional Representations with SGD
Alireza Mousavi-Hosseini, Sejun Park, M. Girotti, Ioannis Mitliagkas, Murat A. Erdogdu
MLT
29 Sep 2022

Hidden Progress in Deep Learning: SGD Learns Parities Near the Computational Limit
Boaz Barak, Benjamin L. Edelman, Surbhi Goel, Sham Kakade, Eran Malach, Cyril Zhang
18 Jul 2022

High-dimensional limit theorems for SGD: Effective dynamics and critical scaling
Gerard Ben Arous, Reza Gheissari, Aukosh Jagannath
08 Jun 2022

Phase diagram of Stochastic Gradient Descent in high-dimensional two-layer neural networks
R. Veiga, Ludovic Stephan, Bruno Loureiro, Florent Krzakala, Lenka Zdeborová
MLT
01 Feb 2022