How To Make the Gradients Small Stochastically: Even Faster Convex and Nonconvex SGD

8 January 2018
Zeyuan Allen-Zhu
    ODL
arXiv:1801.02982

Papers citing "How To Make the Gradients Small Stochastically: Even Faster Convex and Nonconvex SGD"

38 papers shown

Are Convex Optimization Curves Convex?
Guy Barzilai, Ohad Shamir, Moslem Zamani
13 Mar 2025

Faster Acceleration for Steepest Descent
Site Bai, Brian Bullins
ODL
28 Sep 2024

Non-Convex Stochastic Composite Optimization with Polyak Momentum
Yuan Gao, Anton Rodomanov, Sebastian U. Stich
05 Mar 2024

Optimal Guarantees for Algorithmic Reproducibility and Gradient Complexity in Convex Optimization
Liang Zhang, Junchi Yang, Amin Karbasi, Niao He
26 Oct 2023

DualFL: A Duality-based Federated Learning Algorithm with Communication Acceleration in the General Convex Regime
Jongho Park, Jinchao Xu
FedML
17 May 2023

Lower Bounds and Accelerated Algorithms in Distributed Stochastic Optimization with Communication Compression
Yutong He, Xinmeng Huang, Yiming Chen, W. Yin, Kun Yuan
12 May 2023

Deterministic Nonsmooth Nonconvex Optimization
Michael I. Jordan, Guy Kornowski, Tianyi Lin, Ohad Shamir, Manolis Zampetakis
16 Feb 2023

Two Losses Are Better Than One: Faster Optimization Using a Cheaper Proxy
Blake E. Woodworth, Konstantin Mishchenko, Francis R. Bach
07 Feb 2023

Faster Gradient-Free Algorithms for Nonsmooth Nonconvex Stochastic Optimization
Le-Yu Chen, Jing Xu, Luo Luo
16 Jan 2023

Fisher information lower bounds for sampling
Sinho Chewi, P. Gerber, Holden Lee, Chen Lu
05 Oct 2022

On the Complexity of Finding Small Subgradients in Nonsmooth Optimization
Guy Kornowski, Ohad Shamir
21 Sep 2022

Smooth Monotone Stochastic Variational Inequalities and Saddle Point Problems: A Survey
Aleksandr Beznosikov, Boris Polyak, Eduard A. Gorbunov, D. Kovalev, Alexander Gasnikov
29 Aug 2022

Near-Optimal Algorithms for Making the Gradient Small in Stochastic Minimax Optimization
Le-Yu Chen, Luo Luo
11 Aug 2022

Tackling benign nonconvexity with smoothing and stochastic gradients
Harsh Vardhan, Sebastian U. Stich
18 Feb 2022

Sampling Approximately Low-Rank Ising Models: MCMC meets Variational Methods
Frederic Koehler, Holden Lee, Andrej Risteski
17 Feb 2022

The Complexity of Nonconvex-Strongly-Concave Minimax Optimization
Siqi Zhang, Junchi Yang, Cristóbal Guzmán, Negar Kiyavash, Niao He
29 Mar 2021

Machine Unlearning via Algorithmic Stability
Enayat Ullah, Tung Mai, Anup B. Rao, Ryan Rossi, R. Arora
25 Feb 2021

Parameter-free Locally Accelerated Conditional Gradients
Alejandro Carderera, Jelena Diakonikolas, Cheuk Yin Lin, Sebastian Pokutta
12 Feb 2021

Potential Function-based Framework for Making the Gradients Small in Convex and Min-Max Optimization
Jelena Diakonikolas, Puqian Wang
28 Jan 2021

Dual Averaging is Surprisingly Effective for Deep Learning Optimization
Samy Jelassi, Aaron Defazio
20 Oct 2020

Second-Order Information in Non-Convex Stochastic Optimization: Power and Limitations
Yossi Arjevani, Y. Carmon, John C. Duchi, Dylan J. Foster, Ayush Sekhari, Karthik Sridharan
24 Jun 2020

Optimal Complexity in Decentralized Training
Yucheng Lu, Christopher De Sa
15 Jun 2020

Halting Time is Predictable for Large Models: A Universality Property and Average-case Analysis
Courtney Paquette, B. V. Merrienboer, Elliot Paquette, Fabian Pedregosa
08 Jun 2020

Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization
Samuel Horváth, Lihua Lei, Peter Richtárik, Michael I. Jordan
13 Feb 2020

Lower Bounds for Non-Convex Stochastic Optimization
Yossi Arjevani, Y. Carmon, John C. Duchi, Dylan J. Foster, Nathan Srebro, Blake E. Woodworth
05 Dec 2019

The Complexity of Finding Stationary Points with Stochastic Gradient Descent
Yoel Drori, Shigehito Shimizu
04 Oct 2019

Memory-Sample Tradeoffs for Linear Regression with Small Error
Vatsal Sharan, Aaron Sidford, Gregory Valiant
18 Apr 2019

Noisy Matrix Completion: Understanding Statistical Guarantees for Convex Relaxation via Nonconvex Optimization
Yuxin Chen, Yuejie Chi, Jianqing Fan, Cong Ma, Yuling Yan
20 Feb 2019

Asymmetric Valleys: Beyond Sharp and Flat Local Minima
Haowei He, Gao Huang, Yang Yuan
ODL, MLT
02 Feb 2019

Understanding the Acceleration Phenomenon via High-Resolution Differential Equations
Bin Shi, S. Du, Michael I. Jordan, Weijie J. Su
21 Oct 2018

Stochastic model-based minimization of weakly convex functions
Damek Davis, Dmitriy Drusvyatskiy
17 Mar 2018

Stochastic subgradient method converges at the rate $O(k^{-1/4})$ on weakly convex functions
Damek Davis, Dmitriy Drusvyatskiy
08 Feb 2018

Natasha 2: Faster Non-Convex Optimization Than SGD
Zeyuan Allen-Zhu
ODL
29 Aug 2017

Natasha: Faster Non-Convex Stochastic Optimization Via Strongly Non-Convex Parameter
Zeyuan Allen-Zhu
02 Feb 2017

Adaptive Accelerated Gradient Converging Methods under Holderian Error Bound Condition
Mingrui Liu, Tianbao Yang
23 Nov 2016

Accelerate Stochastic Subgradient Method by Leveraging Local Growth Condition
Yi Tian Xu, Qihang Lin, Tianbao Yang
04 Jul 2016

Katyusha: The First Direct Acceleration of Stochastic Gradient Methods
Zeyuan Allen-Zhu
ODL
18 Mar 2016

A Proximal Stochastic Gradient Method with Progressive Variance Reduction
Lin Xiao, Tong Zhang
ODL
19 Mar 2014