Stability and Generalization of Learning Algorithms that Converge to Global Optima
Zachary B. Charles, Dimitris Papailiopoulos
23 October 2017 · arXiv:1710.08402

Papers citing "Stability and Generalization of Learning Algorithms that Converge to Global Optima"

24 / 24 papers shown
Faster WIND: Accelerating Iterative Best-of-N Distillation for LLM Alignment
Tong Yang, Jincheng Mei, H. Dai, Zixin Wen, Shicong Cen, Dale Schuurmans, Yuejie Chi, Bo Dai
20 Feb 2025

Learning Variational Inequalities from Data: Fast Generalization Rates under Strong Monotonicity
Eric Zhao, Tatjana Chavdarova, Michael I. Jordan
20 Feb 2025

Understanding Generalization of Federated Learning: the Trade-off between Model Stability and Optimization
Dun Zeng, Zheshun Wu, Shiyu Liu, Yu Pan, Xiaoying Tang, Zenglin Xu
25 Nov 2024

Rewind-to-Delete: Certified Machine Unlearning for Nonconvex Functions
Siqiao Mu, Diego Klabjan
15 Sep 2024

On the Convergence of the Gradient Descent Method with Stochastic Fixed-point Rounding Errors under the Polyak-Lojasiewicz Inequality
Lu Xia, M. Hochstenbach, Stefano Massei
23 Jan 2023

Federated Minimax Optimization: Improved Convergence Analyses and Algorithms
Pranay Sharma, Rohan Panda, Gauri Joshi, P. Varshney
09 Mar 2022

Stability of Stochastic Gradient Descent on Nonsmooth Convex Losses
Raef Bassily, Vitaly Feldman, Cristóbal Guzmán, Kunal Talwar
12 Jun 2020

SGD Learns Over-parameterized Networks that Provably Generalize on Linearly Separable Data
Alon Brutzkus, Amir Globerson, Eran Malach, Shai Shalev-Shwartz
27 Oct 2017

The Landscape of Deep Learning Algorithms
Pan Zhou, Jiashi Feng
19 May 2017

Data-Dependent Stability of Stochastic Gradient Descent
Ilja Kuzborskij, Christoph H. Lampert
05 Mar 2017

Algorithmic stability and hypothesis complexity
Tongliang Liu, Gábor Lugosi, Gergely Neu, Dacheng Tao
28 Feb 2017

Fast Rates for Empirical Risk Minimization of Strict Saddle Problems
Alon Gonen, Shai Shalev-Shwartz
16 Jan 2017

Identity Matters in Deep Learning
Moritz Hardt, Tengyu Ma
14 Nov 2016

Understanding deep learning requires rethinking generalization
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals
10 Nov 2016

Diverse Neural Network Learns True Target Functions
Bo Xie, Yingyu Liang, Le Song
09 Nov 2016

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
15 Sep 2016

Why does deep and cheap learning work so well?
Henry W. Lin, Max Tegmark, David Rolnick
29 Aug 2016

Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition
Hamed Karimi, J. Nutini, Mark Schmidt
16 Aug 2016

Generalization Properties and Implicit Regularization for Multiple Passes SGM
Junhong Lin, Raffaello Camoriano, Lorenzo Rosasco
26 May 2016

Deep Learning without Poor Local Minima
Kenji Kawaguchi
23 May 2016

Train faster, generalize better: Stability of stochastic gradient descent
Moritz Hardt, Benjamin Recht, Y. Singer
03 Sep 2015

On the Generalization Properties of Differential Privacy
Kobbi Nissim, Uri Stemmer
22 Apr 2015

Preserving Statistical Validity in Adaptive Data Analysis
Cynthia Dwork, Vitaly Feldman, Moritz Hardt, T. Pitassi, Omer Reingold, Aaron Roth
10 Nov 2014

Stochastic First- and Zeroth-order Methods for Nonconvex Stochastic Programming
Saeed Ghadimi, Guanghui Lan
22 Sep 2013