From Stability to Chaos: Analyzing Gradient Descent Dynamics in Quadratic Regression

2 October 2023
Xuxing Chen, Krishnakumar Balasubramanian, Promit Ghosal, Bhavya Agrawalla

Papers citing "From Stability to Chaos: Analyzing Gradient Descent Dynamics in Quadratic Regression"

14 / 14 papers shown

Minimax Optimal Convergence of Gradient Descent in Logistic Regression via Large and Adaptive Stepsizes
Ruiqi Zhang, Jingfeng Wu, Licong Lin, Peter L. Bartlett
05 Apr 2025

Learning a Single Index Model from Anisotropic Data with vanilla Stochastic Gradient Descent
Guillaume Braun, Minh Ha Quang, Masaaki Imaizumi
MLT
31 Mar 2025

Universal Sharpness Dynamics in Neural Network Training: Fixed Point Analysis, Edge of Stability, and Route to Chaos
Dayal Singh Kalra, Tianyu He, M. Barkeshli
17 Feb 2025

Tracking solutions of time-varying variational inequalities
Hédi Hadiji, Sarah Sachs, Cristóbal Guzmán
20 Jun 2024

Complex fractal trainability boundary can arise from trivial non-convexity
Yizhou Liu
20 Jun 2024

Gradient Descent on Logistic Regression with Non-Separable Data and Large Step Sizes
Si Yi Meng, Antonio Orvieto, Daniel Yiming Cao, Christopher De Sa
07 Jun 2024

The boundary of neural network trainability is fractal
Jascha Narain Sohl-Dickstein
09 Feb 2024

Understanding Edge-of-Stability Training Dynamics with a Minimalist Example
Xingyu Zhu, Zixuan Wang, Xiang Wang, Mo Zhou, Rong Ge
07 Oct 2022

Training Scale-Invariant Neural Networks on the Sphere Can Happen in Three Regimes
M. Kodryan, E. Lobacheva, M. Nakhodnov, Dmitry Vetrov
08 Sep 2022

Chaotic Regularization and Heavy-Tailed Limits for Deterministic Gradient Descent
S. H. Lim, Yijun Wan, Umut Şimşekli
23 May 2022

Understanding Gradient Descent on Edge of Stability in Deep Learning
Sanjeev Arora, Zhiyuan Li, A. Panigrahi
MLT
19 May 2022

Neural Network Weights Do Not Converge to Stationary Points: An Invariant Measure Perspective
Junzhe Zhang, Haochuan Li, S. Sra, Ali Jadbabaie
12 Oct 2021

Large Learning Rate Tames Homogeneity: Convergence and Balancing Effect
Yuqing Wang, Minshuo Chen, T. Zhao, Molei Tao
AI4CE
07 Oct 2021

The large learning rate phase of deep learning: the catapult mechanism
Aitor Lewkowycz, Yasaman Bahri, Ethan Dyer, Jascha Narain Sohl-Dickstein, Guy Gur-Ari
ODL
04 Mar 2020