ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Sharper Analysis for Minibatch Stochastic Proximal Point Methods: Stability, Smoothness, and Deviation
Xiao-Tong Yuan, P. Li
9 January 2023 (arXiv:2301.03125)

Papers citing "Sharper Analysis for Minibatch Stochastic Proximal Point Methods: Stability, Smoothness, and Deviation"

40 / 40 papers shown
1. Stability and Risk Bounds of Iterative Hard Thresholding. Xiao-Tong Yuan, P. Li. 17 Mar 2022.
2. Minibatch and Momentum Model-based Methods for Stochastic Weakly Convex Optimization. Qi Deng, Wenzhi Gao. 06 Jun 2021.
3. An Even More Optimal Stochastic Optimization Algorithm: Minibatching and Interpolation Learning. Blake E. Woodworth, Nathan Srebro. 04 Jun 2021.
4. Practical Schemes for Finding Near-Stationary Points of Convex Finite-Sums. Kaiwen Zhou, Lai Tian, Anthony Man-Cho So, James Cheng. 25 May 2021.
5. Stability and Deviation Optimal Risk Bounds with Convergence Rate $O(1/n)$. Yegor Klochkov, Nikita Zhivotovskiy. 22 Mar 2021.
6. Accelerated, Optimal, and Parallel: Some Results on Model-Based Stochastic Optimization. Karan N. Chadha, Gary Cheng, John C. Duchi. 07 Jan 2021.
7. Fine-Grained Analysis of Stability and Generalization for Stochastic Gradient Descent. Yunwen Lei, Yiming Ying. 15 Jun 2020.
8. Stability of Stochastic Gradient Descent on Nonsmooth Convex Losses. Raef Bassily, Vitaly Feldman, Cristóbal Guzmán, Kunal Talwar. 12 Jun 2020.
9. Sharper Bounds for Uniformly Stable Algorithms. Olivier Bousquet, Yegor Klochkov, Nikita Zhivotovskiy. 17 Oct 2019.
10. On Convergence of Distributed Approximate Newton Methods: Globalization, Sharper Bounds and Beyond. Xiao-Tong Yuan, Ping Li. 06 Aug 2019.
11. A Generic Acceleration Framework for Stochastic Composite Optimization. A. Kulunchakov, Julien Mairal. 03 Jun 2019.
12. Large Batch Optimization for Deep Learning: Training BERT in 76 Minutes. Yang You, Jing Li, Sashank J. Reddi, Jonathan Hseu, Sanjiv Kumar, Srinadh Bhojanapalli, Xiaodan Song, J. Demmel, Kurt Keutzer, Cho-Jui Hsieh. 01 Apr 2019.
13. The Importance of Better Models in Stochastic Optimization. Hilal Asi, John C. Duchi. 20 Mar 2019.
14. High Probability Generalization Bounds for Uniformly Stable Algorithms with Nearly Optimal Rate. Vitaly Feldman, J. Vondrák. 27 Feb 2019.
15. Generalization Bounds for Uniformly Stable Algorithms. Vitaly Feldman, J. Vondrák. 24 Dec 2018.
16. Stochastic (Approximate) Proximal Point Methods: Convergence, Optimality, and Adaptivity. Hilal Asi, John C. Duchi. 12 Oct 2018.
17. Stochastic Model-Based Minimization of Weakly Convex Functions. Damek Davis, Dmitriy Drusvyatskiy. 17 Mar 2018.
18. Stochastic Methods for Composite and Weakly Convex Optimization Problems. John C. Duchi, Feng Ruan. 24 Mar 2017.
19. Memory and Communication Efficient Distributed Stochastic Optimization with Minibatch-Prox. Jialei Wang, Weiran Wang, Nathan Srebro. 21 Feb 2017.
20. Empirical Risk Minimization for Stochastic Convex Optimization: $O(1/n)$- and $O(1/n^2)$-type of Risk Bounds. Lijun Zhang, Tianbao Yang, Rong Jin. 07 Feb 2017.
21. Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition. Hamed Karimi, J. Nutini, Mark Schmidt. 16 Aug 2016.
22. The Landscape of Empirical Risk for Non-convex Losses. Song Mei, Yu Bai, Andrea Montanari. 22 Jul 2016.
23. Optimization Methods for Large-Scale Machine Learning. Léon Bottou, Frank E. Curtis, J. Nocedal. 15 Jun 2016.
24. Efficient Distributed Learning with Sparsity. Jialei Wang, Mladen Kolar, Nathan Srebro, Tong Zhang. 25 May 2016.
25. Katyusha: The First Direct Acceleration of Stochastic Gradient Methods. Zeyuan Allen-Zhu. 18 Mar 2016.
26. Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression. Aymeric Dieuleveut, Nicolas Flammarion, Francis R. Bach. 17 Feb 2016.
27. Train Faster, Generalize Better: Stability of Stochastic Gradient Descent. Moritz Hardt, Benjamin Recht, Y. Singer. 03 Sep 2015.
28. Towards Stability and Optimality in Stochastic Gradient Descent. Panos Toulis, Dustin Tran, E. Airoldi. 10 May 2015.
29. Competing with the Empirical Risk Minimizer in a Single Pass. Roy Frostig, Rong Ge, Sham Kakade, Aaron Sidford. 20 Dec 2014.
30. Communication-Efficient Distributed Dual Coordinate Ascent. Martin Jaggi, Virginia Smith, Martin Takáč, Jonathan Terhorst, S. Krishnan, Thomas Hofmann, Michael I. Jordan. 04 Sep 2014.
31. SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives. Aaron Defazio, Francis R. Bach, Simon Lacoste-Julien. 01 Jul 2014.
32. A Proximal Stochastic Gradient Method with Progressive Variance Reduction. Lin Xiao, Tong Zhang. 19 Mar 2014.
33. Communication Efficient Distributed Optimization Using an Approximate Newton-type Method. Ohad Shamir, Nathan Srebro, Tong Zhang. 30 Dec 2013.
34. Making Gradient Descent Optimal for Strongly Convex Stochastic Optimization. Alexander Rakhlin, Ohad Shamir, Karthik Sridharan. 26 Sep 2011.
35. Convergence Rates of Inexact Proximal-Gradient Methods for Convex Optimization. Mark Schmidt, Nicolas Le Roux, Francis R. Bach. 12 Sep 2011.
36. A Unified Framework for High-Dimensional Analysis of M-Estimators with Decomposable Regularizers. S. Negahban, Pradeep Ravikumar, Martin J. Wainwright, Bin Yu. 13 Oct 2010.
37. Optimistic Rates for Learning with a Smooth Loss. Nathan Srebro, Karthik Sridharan, Ambuj Tewari. 20 Sep 2010.
38. Information-Theoretic Lower Bounds on the Oracle Complexity of Stochastic Convex Optimization. Alekh Agarwal, Peter L. Bartlett, Pradeep Ravikumar, Martin J. Wainwright. 03 Sep 2010.
39. High-Dimensional Generalized Linear Models and the Lasso. Sara van de Geer. 04 Apr 2008.
40. Sparse Additive Models. Pradeep Ravikumar, John D. Lafferty, Han Liu, Larry A. Wasserman. 28 Nov 2007.