An Analysis of Constant Step Size SGD in the Non-convex Regime: Asymptotic Normality and Bias
arXiv:2006.07904, 14 June 2020
Lu Yu, Krishnakumar Balasubramanian, S. Volgushev, Murat A. Erdogdu
Papers citing "An Analysis of Constant Step Size SGD in the Non-convex Regime: Asymptotic Normality and Bias" (43 papers shown):
A Piecewise Lyapunov Analysis of Sub-quadratic SGD: Applications to Robust and Quantile Regression. Yixuan Zhang, Dongyan, Yudong Chen, Qiaomin Xie. 11 Apr 2025.

Online Inference for Quantiles by Constant Learning-Rate Stochastic Gradient Descent. Ziyang Wei, Jiaqi Li, Likai Chen, W. Wu. 04 Mar 2025.

Coupling-based Convergence Diagnostic and Stepsize Scheme for Stochastic Gradient Descent. Xiang Li, Qiaomin Xie. 15 Dec 2024.

Two-Timescale Linear Stochastic Approximation: Constant Stepsizes Go a Long Way. Jeongyeol Kwon, Luke Dotson, Yudong Chen, Qiaomin Xie. 16 Oct 2024.

Nonasymptotic Analysis of Stochastic Gradient Descent with the Richardson-Romberg Extrapolation. Marina Sheshukova, Denis Belomestny, Alain Durmus, Eric Moulines, Alexey Naumov, S. Samsonov. 07 Oct 2024.

Enhancing Stochastic Optimization for Statistical Efficiency Using ROOT-SGD with Diminishing Stepsize. Tong Zhang, Chris Junchi Li. 15 Jul 2024.

Computing the Bias of Constant-step Stochastic Approximation with Markovian Noise. Sebastian Allmeier, Nicolas Gast. 23 May 2024.

Uncertainty quantification by block bootstrap for differentially private stochastic gradient descent. Holger Dette, Carina Graw. 21 May 2024.

Prelimit Coupling and Steady-State Convergence of Constant-stepsize Nonsmooth Contractive SA. Yixuan Zhang, D. Huo, Yudong Chen, Qiaomin Xie. 09 Apr 2024.
A Selective Review on Statistical Methods for Massive Data Computation: Distributed Computing, Subsampling, and Minibatch Techniques. Xuetong Li, Yuan Gao, Hong Chang, Danyang Huang, Yingying Ma, ..., Ke Xu, Jing Zhou, Xuening Zhu, Yingqiu Zhu, Hansheng Wang. 17 Mar 2024.

Constant Stepsize Q-learning: Distributional Convergence, Bias and Extrapolation. Yixuan Zhang, Qiaomin Xie. 25 Jan 2024.

Effectiveness of Constant Stepsize in Markovian LSA and Statistical Inference. D. Huo, Yudong Chen, Qiaomin Xie. 18 Dec 2023.

Demystifying the Myths and Legends of Nonconvex Convergence of SGD. Aritra Dutta, El Houcine Bergou, Soumia Boucherouite, Nicklas Werge, M. Kandemir, Xin Li. 19 Oct 2023.

Robust Stochastic Optimization via Gradient Quantile Clipping. Ibrahim Merad, Stéphane Gaïffas. 29 Sep 2023.

The Effect of SGD Batch Size on Autoencoder Learning: Sparsity, Sharpness, and Feature Learning. Nikhil Ghosh, Spencer Frei, Wooseok Ha, Ting Yu. 06 Aug 2023.

Online covariance estimation for stochastic gradient descent under Markovian sampling. Abhishek Roy, Krishnakumar Balasubramanian. 03 Aug 2023.

Weighted Averaged Stochastic Gradient Descent: Asymptotic Normality and Optimality. Ziyang Wei, Wanrong Zhu, W. Wu. 13 Jul 2023.
Stochastic Methods in Variational Inequalities: Ergodicity, Bias and Refinements. Emmanouil-Vasileios Vlatakis-Gkaragkounis, Angeliki Giannou, Yudong Chen, Qiaomin Xie. 28 Jun 2023.

Convergence and concentration properties of constant step-size SGD through Markov chains. Ibrahim Merad, Stéphane Gaïffas. 20 Jun 2023.

Statistical Analysis of Fixed Mini-Batch Gradient Descent Estimator. Haobo Qi, Feifei Wang, Hansheng Wang. 13 Apr 2023.

High-dimensional scaling limits and fluctuations of online least-squares SGD with smooth covariance. Krishnakumar Balasubramanian, Promit Ghosal, Ye He. 03 Apr 2023.

Towards a Complete Analysis of Langevin Monte Carlo: Beyond Poincaré Inequality. Alireza Mousavi-Hosseini, Tyler Farghly, Ye He, Krishnakumar Balasubramanian, Murat A. Erdogdu. 07 Mar 2023.

Statistical Inference for Linear Functionals of Online SGD in High-dimensional Linear Regression. Bhavya Agrawalla, Krishnakumar Balasubramanian, Promit Ghosal. 20 Feb 2023.

Why is parameter averaging beneficial in SGD? An objective smoothing perspective. Atsushi Nitanda, Ryuhei Kikuchi, Shugo Maeda, Denny Wu. 18 Feb 2023.

Bias and Extrapolation in Markovian Linear Stochastic Approximation with Constant Stepsizes. D. Huo, Yudong Chen, Qiaomin Xie. 03 Oct 2022.
Neural Networks Efficiently Learn Low-Dimensional Representations with SGD. Alireza Mousavi-Hosseini, Sejun Park, M. Girotti, Ioannis Mitliagkas, Murat A. Erdogdu. 29 Sep 2022.

Two-Tailed Averaging: Anytime, Adaptive, Once-in-a-While Optimal Weight Averaging for Better Generalization. Gábor Melis. 26 Sep 2022.

Generalization Bounds for Stochastic Gradient Descent via Localized ε-Covers. Sejun Park, Umut Şimşekli, Murat A. Erdogdu. 19 Sep 2022.

On Uniform Boundedness Properties of SGD and its Momentum Variants. Xiaoyu Wang, M. Johansson. 25 Jan 2022.

Non-Asymptotic Analysis of Online Multiplicative Stochastic Gradient Descent. Riddhiman Bhattacharya, Tiefeng Jiang. 14 Dec 2021.

Stochastic Gradient Line Bayesian Optimization for Efficient Noise-Robust Optimization of Parameterized Quantum Circuits. Shiro Tamiya, H. Yamasaki. 15 Nov 2021.

Stationary Behavior of Constant Stepsize SGD Type Algorithms: An Asymptotic Characterization. Zaiwei Chen, Shancong Mou, S. T. Maguluri. 11 Nov 2021.

Bootstrapping the error of Oja's algorithm. Robert Lunde, Purnamrita Sarkar, Rachel A. Ward. 28 Jun 2021.

Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms. A. Camuto, George Deligiannidis, Murat A. Erdogdu, Mert Gurbuzbalaban, Umut Şimşekli, Lingjiong Zhu. 09 Jun 2021.
Learning Curves for SGD on Structured Features. Blake Bordelon, C. Pehlevan. 04 Jun 2021.

Manipulating SGD with Data Ordering Attacks. Ilia Shumailov, Zakhar Shumaylov, Dmitry Kazhdan, Yiren Zhao, Nicolas Papernot, Murat A. Erdogdu, Ross J. Anderson. 19 Apr 2021.

Convergence Rates of Stochastic Gradient Descent under Infinite Noise Variance. Hongjian Wang, Mert Gurbuzbalaban, Lingjiong Zhu, Umut Şimşekli, Murat A. Erdogdu. 20 Feb 2021.

Statistical Inference for Polyak-Ruppert Averaged Zeroth-order Stochastic Gradient Algorithm. Yanhao Jin, Tesi Xiao, Krishnakumar Balasubramanian. 10 Feb 2021.

Stochastic Multi-level Composition Optimization Algorithms with Level-Independent Convergence Rates. Krishnakumar Balasubramanian, Saeed Ghadimi, A. Nguyen. 24 Aug 2020.

Convergence of Langevin Monte Carlo in Chi-Squared and Renyi Divergence. Murat A. Erdogdu, Rasa Hosseinzadeh, Matthew Shunshi Zhang. 22 Jul 2020.

On the Convergence of Langevin Monte Carlo: The Interplay between Tail Growth and Smoothness. Murat A. Erdogdu, Rasa Hosseinzadeh. 27 May 2020.

Error Lower Bounds of Constant Step-size Stochastic Gradient Descent. Zhiyan Ding, Yiding Chen, Qin Li, Xiaojin Zhu. 18 Oct 2019.

Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition. Hamed Karimi, J. Nutini, Mark W. Schmidt. 16 Aug 2016.