On the Almost Sure Convergence of Stochastic Gradient Descent in Non-Convex Problems
P. Mertikopoulos, Nadav Hallak, Ali Kavis, V. Cevher
arXiv:2006.11144, 19 June 2020
Papers citing "On the Almost Sure Convergence of Stochastic Gradient Descent in Non-Convex Problems" (showing 50 of 60)
- "Spike-timing-dependent Hebbian learning as noisy gradient descent" (Niklas Dexheimer, Sascha Gaudlitz, Johannes Schmidt-Hieber; 15 May 2025)
- "Stochastic Gradient Descent in Non-Convex Problems: Asymptotic Convergence with Relaxed Step-Size via Stopping Time Methods" (Ruinan Jin, Difei Cheng, Hong Qiao, Xin Shi, Shaodong Liu, Bo Zhang; 17 Apr 2025)
- "A Near Complete Nonasymptotic Generalization Theory For Multilayer Neural Networks: Beyond the Bias-Variance Tradeoff" (Hao Yu, Xiangyang Ji; 03 Mar 2025) [AI4CE]
- "Nesterov acceleration in benignly non-convex landscapes" (Kanan Gupta, Stephan Wojtowytsch; 10 Oct 2024)
- "Dynamic Decoupling of Placid Terminal Attractor-based Gradient Descent Algorithm" (Jinwei Zhao, Marco Gori, Alessandro Betti, S. Melacci, Hongtao Zhang, Jiedong Liu, Xinhong Hei; 10 Sep 2024)
- "Lyapunov weights to convey the meaning of time in physics-informed neural networks" (Gabriel Turinici; 31 Jul 2024)
- "Almost sure convergence rates of stochastic gradient methods under gradient domination" (Simon Weissmann, Sara Klein, Waïss Azizian, Leif Döring; 22 May 2024)
- "Uncertainty quantification by block bootstrap for differentially private stochastic gradient descent" (Holger Dette, Carina Graw; 21 May 2024)
- "Optimal time sampling in physics-informed neural networks" (Gabriel Turinici; 29 Apr 2024) [PINN]
- "Federated reinforcement learning for robot motion planning with zero-shot generalization" (Zhenyuan Yuan, Siyuan Xu, Minghui Zhu; 20 Mar 2024) [FedML]
- "Fed-QSSL: A Framework for Personalized Federated Learning under Bitwidth and Data Heterogeneity" (Yiyue Chen, H. Vikalo, C. Wang; 20 Dec 2023) [FedML]
- "Learning Unorthogonalized Matrices for Rotation Estimation" (Kerui Gu, Zhihao Li, Shiyong Liu, Jianzhuang Liu, Songcen Xu, Youliang Yan, Michael Bi Mi, Kenji Kawaguchi, Angela Yao; 01 Dec 2023)
- "Adam-like Algorithm with Smooth Clipping Attains Global Minima: Analysis Based on Ergodicity of Functional SDEs" (Keisuke Suzuki; 29 Nov 2023)
- "Riemannian stochastic optimization methods avoid strict saddle points" (Ya-Ping Hsieh, Mohammad Reza Karimi, Andreas Krause, P. Mertikopoulos; 04 Nov 2023)
- "Tackling the Curse of Dimensionality with Physics-Informed Neural Networks" (Zheyuan Hu, K. Shukla, George Karniadakis, Kenji Kawaguchi; 23 Jul 2023) [PINN, AI4CE]
- "Convergence of stochastic gradient descent under a local Lojasiewicz condition for deep neural networks" (Jing An, Jianfeng Lu; 18 Apr 2023)
- "High-dimensional scaling limits and fluctuations of online least-squares SGD with smooth covariance" (Krishnakumar Balasubramanian, Promit Ghosal, Ye He; 03 Apr 2023)
- "Type-II Saddles and Probabilistic Stability of Stochastic Gradient Descent" (Liu Ziyin, Botao Li, Tomer Galanti, Masakuni Ueda; 23 Mar 2023)
- "On the existence of optimal shallow feedforward networks with ReLU activation" (Steffen Dereich, Sebastian Kassing; 06 Mar 2023)
- "On the existence of minimizers in shallow residual ReLU neural network optimization landscapes" (Steffen Dereich, Arnulf Jentzen, Sebastian Kassing; 28 Feb 2023)
- "Statistical Inference for Linear Functionals of Online SGD in High-dimensional Linear Regression" (Bhavya Agrawalla, Krishnakumar Balasubramanian, Promit Ghosal; 20 Feb 2023)
- "Almost Sure Saddle Avoidance of Stochastic Gradient Methods without the Bounded Gradient Assumption" (Jun Liu, Ye Yuan; 15 Feb 2023) [ODL]
- "FedRC: Tackling Diverse Distribution Shifts Challenge in Federated Learning by Robust Clustering" (Yongxin Guo, Xiaoying Tang, Tao R. Lin; 29 Jan 2023) [OOD, FedML]
- "Variance Reduction for Score Functions Using Optimal Baselines" (Ronan L. Keane, H. Gao; 27 Dec 2022)
- "Efficiency Ordering of Stochastic Gradient Descent" (Jie Hu, Vishwaraj Doshi, Do Young Eun; 15 Sep 2022)
- "Convergence of Batch Updating Methods with Approximate Gradients and/or Noisy Measurements: Theory and Computational Results" (Tadipatri Uday, M. Vidyasagar; 12 Sep 2022)
- "Neural Tangent Kernel: A Survey" (Eugene Golikov, Eduard Pokonechnyy, Vladimir Korviakov; 29 Aug 2022)
- "Scalable Set Encoding with Universal Mini-Batch Consistency and Unbiased Full Set Gradient Approximation" (Jeffrey Willette, Seanie Lee, Bruno Andreis, Kenji Kawaguchi, Juho Lee, Sung Ju Hwang; 26 Aug 2022)
- "A unified stochastic approximation framework for learning in games" (P. Mertikopoulos, Ya-Ping Hsieh, V. Cevher; 08 Jun 2022)
- "A Unified Convergence Theorem for Stochastic Optimization Methods" (Xiao Li, Andre Milzarek; 08 Jun 2022)
- "Metrizing Fairness" (Yves Rychener, Bahar Taşkesen, Daniel Kuhn; 30 May 2022) [FaML]
- "Uniform Generalization Bound on Time and Inverse Temperature for Gradient Descent Algorithm and its Application to Analysis of Simulated Annealing" (Keisuke Suzuki; 25 May 2022) [AI4CE]
- "Weak Convergence of Approximate Reflection Coupling and its Application to Non-convex Optimization" (Keisuke Suzuki; 24 May 2022)
- "A Local Convergence Theory for the Stochastic Gradient Descent Method in Non-Convex Optimization With Non-isolated Local Minima" (Tae-Eon Ko, Xiantao Li; 21 Mar 2022)
- "Monte Carlo PINNs: deep learning approach for forward and inverse problems involving high dimensional fractional partial differential equations" (Ling Guo, Hao Wu, Xiao-Jun Yu, Tao Zhou; 16 Mar 2022) [PINN, AI4CE]
- "On Almost Sure Convergence Rates of Stochastic Gradient Methods" (Jun Liu, Ye Yuan; 09 Feb 2022)
- "A subsampling approach for Bayesian model selection" (Jon Lachmann, G. Storvik, F. Frommlet, Aliaksadr Hubin; 31 Jan 2022) [BDL]
- "On Uniform Boundedness Properties of SGD and its Momentum Variants" (Xiaoyu Wang, M. Johansson; 25 Jan 2022)
- "3DPG: Distributed Deep Deterministic Policy Gradient Algorithms for Networked Multi-Agent Systems" (Adrian Redder, Arunselvan Ramaswamy, Holger Karl; 03 Jan 2022) [OffRL]
- "Non-Asymptotic Analysis of Online Multiplicative Stochastic Gradient Descent" (Riddhiman Bhattacharya, Tiefeng Jiang; 14 Dec 2021)
- "Stationary Behavior of Constant Stepsize SGD Type Algorithms: An Asymptotic Characterization" (Zaiwei Chen, Shancong Mou, S. T. Maguluri; 11 Nov 2021)
- "Inertial Newton Algorithms Avoiding Strict Saddle Points" (Camille Castera; 08 Nov 2021) [ODL]
- "Adaptation of the Independent Metropolis-Hastings Sampler with Normalizing Flow Proposals" (James A. Brofos, Marylou Gabrié, Marcus A. Brubaker, Roy R. Lederman; 25 Oct 2021)
- "Accelerated Almost-Sure Convergence Rates for Nonconvex Stochastic Gradient Descent using Stochastic Learning Rates" (Theodoros Mamalis, D. Stipanović, R. Tao; 25 Oct 2021)
- "Beyond Exact Gradients: Convergence of Stochastic Soft-Max Policy Gradient Methods with Entropy Regularization" (Yuhao Ding, Junzi Zhang, Hyunin Lee, Javad Lavaei; 19 Oct 2021)
- "Global Convergence and Stability of Stochastic Gradient Descent" (V. Patel, Shushu Zhang, Bowen Tian; 04 Oct 2021)
- "Stochastic Subgradient Descent on a Generic Definable Function Converges to a Minimizer" (S. Schechtman; 06 Sep 2021)
- "Convergence of gradient descent for learning linear neural networks" (Gabin Maxime Nguegnang, Holger Rauhut, Ulrich Terstiege; 04 Aug 2021) [MLT]
- "SGD with a Constant Large Learning Rate Can Converge to Local Maxima" (Liu Ziyin, Botao Li, James B. Simon, Masakuni Ueda; 25 Jul 2021)
- "Strategic Instrumental Variable Regression: Recovering Causal Relationships From Strategic Responses" (Keegan Harris, Daniel Ngo, Logan Stapleton, Hoda Heidari, Zhiwei Steven Wu; 12 Jul 2021)