Fine-Grained Analysis of Stability and Generalization for Stochastic Gradient Descent
15 June 2020
Yunwen Lei, Yiming Ying
MLT

Papers citing "Fine-Grained Analysis of Stability and Generalization for Stochastic Gradient Descent"

36 / 36 papers shown

Better Rates for Random Task Orderings in Continual Linear Models
Itay Evron, Ran Levinstein, Matan Schliserman, Uri Sherman, Tomer Koren, Daniel Soudry, Nathan Srebro
CLL
35 · 0 · 0
06 Apr 2025

Learning Variational Inequalities from Data: Fast Generalization Rates under Strong Monotonicity
Eric Zhao, Tatjana Chavdarova, Michael I. Jordan
45 · 0 · 0
20 Feb 2025

Understanding Generalization of Federated Learning: the Trade-off between Model Stability and Optimization
Dun Zeng, Zheshun Wu, Shiyu Liu, Yu Pan, Xiaoying Tang, Zenglin Xu
MLT, FedML
89 · 1 · 0
25 Nov 2024

Sharper Guarantees for Learning Neural Network Classifiers with Gradient Methods
Hossein Taheri, Christos Thrampoulidis, Arya Mazumdar
MLT
33 · 0 · 0
13 Oct 2024

A-FedPD: Aligning Dual-Drift is All Federated Primal-Dual Learning Needs
Yan Sun, Li Shen, Dacheng Tao
FedML
25 · 0 · 0
27 Sep 2024

Convex SGD: Generalization Without Early Stopping
Julien Hendrickx, A. Olshevsky
MLT, LRM
25 · 1 · 0
08 Jan 2024

Generalization Bounds for Label Noise Stochastic Gradient Descent
Jung Eun Huh, Patrick Rebeschini
13 · 1 · 0
01 Nov 2023

Demystifying the Myths and Legends of Nonconvex Convergence of SGD
Aritra Dutta, El Houcine Bergou, Soumia Boucherouite, Nicklas Werge, M. Kandemir, Xin Li
26 · 0 · 0
19 Oct 2023

Stability and Generalization of Stochastic Compositional Gradient Descent Algorithms
Minghao Yang, Xiyuan Wei, Tianbao Yang, Yiming Ying
37 · 1 · 0
07 Jul 2023

Generalization Guarantees of Gradient Descent for Multi-Layer Neural Networks
Puyu Wang, Yunwen Lei, Di Wang, Yiming Ying, Ding-Xuan Zhou
MLT
27 · 3 · 0
26 May 2023

Fast Convergence in Learning Two-Layer Neural Networks with Separable Data
Hossein Taheri, Christos Thrampoulidis
MLT
16 · 3 · 0
22 May 2023

Uniform-in-Time Wasserstein Stability Bounds for (Noisy) Stochastic Gradient Descent
Lingjiong Zhu, Mert Gurbuzbalaban, Anant Raj, Umut Simsekli
26 · 6 · 0
20 May 2023

Learning Trajectories are Generalization Indicators
Jingwen Fu, Zhizheng Zhang, Dacheng Yin, Yan Lu, Nanning Zheng
AI4CE
28 · 3 · 0
25 Apr 2023

Cyclic and Randomized Stepsizes Invoke Heavier Tails in SGD than Constant Stepsize
Mert Gurbuzbalaban, Yuanhan Hu, Umut Simsekli, Lingjiong Zhu
LRM
20 · 1 · 0
10 Feb 2023

Algorithmic Stability of Heavy-Tailed SGD with General Loss Functions
Anant Raj, Lingjiong Zhu, Mert Gurbuzbalaban, Umut Simsekli
26 · 15 · 0
27 Jan 2023

A Stability Analysis of Fine-Tuning a Pre-Trained Model
Z. Fu, Anthony Man-Cho So, Nigel Collier
23 · 3 · 0
24 Jan 2023

Sharper Analysis for Minibatch Stochastic Proximal Point Methods: Stability, Smoothness, and Deviation
Xiao-Tong Yuan, P. Li
32 · 2 · 0
09 Jan 2023

On the Algorithmic Stability and Generalization of Adaptive Optimization Methods
Han Nguyen, Hai Pham, Sashank J. Reddi, Barnabás Póczos
ODL, AI4CE
17 · 2 · 0
08 Nov 2022

On Stability and Generalization of Bilevel Optimization Problem
Meng Ding, Ming Lei, Yunwen Lei, Di Wang, Jinhui Xu
32 · 0 · 0
03 Oct 2022

Exploring the Algorithm-Dependent Generalization of AUPRC Optimization with List Stability
Peisong Wen, Qianqian Xu, Zhiyong Yang, Yuan He, Qingming Huang
53 · 10 · 0
27 Sep 2022

Stability and Generalization for Markov Chain Stochastic Gradient Methods
Puyu Wang, Yunwen Lei, Yiming Ying, Ding-Xuan Zhou
16 · 18 · 0
16 Sep 2022

On Generalization of Decentralized Learning with Separable Data
Hossein Taheri, Christos Thrampoulidis
FedML
27 · 10 · 0
15 Sep 2022

Differentially Private Stochastic Gradient Descent with Low-Noise
Puyu Wang, Yunwen Lei, Yiming Ying, Ding-Xuan Zhou
FedML
43 · 5 · 0
09 Sep 2022

Uniform Stability for First-Order Empirical Risk Minimization
Amit Attia, Tomer Koren
18 · 5 · 0
17 Jul 2022

Beyond Lipschitz: Sharp Generalization and Excess Risk Bounds for Full-Batch GD
Konstantinos E. Nikolakakis, Farzin Haddadpour, Amin Karbasi, Dionysios S. Kalogerias
40 · 17 · 0
26 Apr 2022

Sharper Utility Bounds for Differentially Private Models
Yilin Kang, Yong Liu, Jian Li, Weiping Wang
FedML
29 · 3 · 0
22 Apr 2022

Stability vs Implicit Bias of Gradient Methods on Separable Data and Beyond
Matan Schliserman, Tomer Koren
24 · 23 · 0
27 Feb 2022

Differentially Private SGDA for Minimax Problems
Zhenhuan Yang, Shu Hu, Yunwen Lei, Kush R. Varshney, Siwei Lyu, Yiming Ying
36 · 19 · 0
22 Jan 2022

Stability Based Generalization Bounds for Exponential Family Langevin Dynamics
A. Banerjee, Tiancong Chen, Xinyan Li, Yingxue Zhou
31 · 8 · 0
09 Jan 2022

On the Generalization of Models Trained with SGD: Information-Theoretic Bounds and Implications
Ziqiao Wang, Yongyi Mao
FedML, MLT
37 · 22 · 0
07 Oct 2021

Stability and Generalization for Randomized Coordinate Descent
Puyu Wang, Liang Wu, Yunwen Lei
18 · 7 · 0
17 Aug 2021

Improved Learning Rates for Stochastic Optimization: Two Theoretical Viewpoints
Shaojie Li, Yong Liu
20 · 13 · 0
19 Jul 2021

Stability of SGD: Tightness Analysis and Improved Bounds
Yikai Zhang, Wenjia Zhang, Sammy Bald, Vamsi Pingali, Chao Chen, Mayank Goswami
MLT
19 · 36 · 0
10 Feb 2021

Stability of Stochastic Gradient Descent on Nonsmooth Convex Losses
Raef Bassily, Vitaly Feldman, Cristóbal Guzmán, Kunal Talwar
MLT
8 · 192 · 0
12 Jun 2020

A simpler approach to obtaining an O(1/t) convergence rate for the projected stochastic subgradient method
Simon Lacoste-Julien, Mark W. Schmidt, Francis R. Bach
124 · 259 · 0
10 Dec 2012

Stochastic Gradient Descent for Non-smooth Optimization: Convergence Results and Optimal Averaging Schemes
Ohad Shamir, Tong Zhang
101 · 570 · 0
08 Dec 2012