A Universally Optimal Multistage Accelerated Stochastic Gradient Method (arXiv:1901.08022)

23 January 2019
N. Aybat, Alireza Fallah, Mert Gurbuzbalaban, Asuman Ozdaglar
ODL

Papers citing "A Universally Optimal Multistage Accelerated Stochastic Gradient Method"

12 / 12 papers shown

On the Performance Analysis of Momentum Method: A Frequency Domain Perspective
Xianliang Li, Jun Luo, Zhiwei Zheng, Hanxiao Wang, Li Luo, Lingkun Wen, Linlong Wu, Sheng Xu
29 Nov 2024

First Order Methods with Markovian Noise: from Acceleration to Variational Inequalities
Aleksandr Beznosikov, S. Samsonov, Marina Sheshukova, Alexander Gasnikov, A. Naumov, Eric Moulines
25 May 2023

Uniform-in-Time Wasserstein Stability Bounds for (Noisy) Stochastic Gradient Descent
Lingjiong Zhu, Mert Gurbuzbalaban, Anant Raj, Umut Simsekli
20 May 2023

Tradeoffs between convergence rate and noise amplification for momentum-based accelerated optimization algorithms
Hesameddin Mohammadi, Meisam Razaviyayn, Mihailo R. Jovanović
24 Sep 2022

Convex Programs and Lyapunov Functions for Reinforcement Learning: A Unified Perspective on the Analysis of Value-Based Methods
Xing-ming Guo, Bin Hu
OffRL
14 Feb 2022

Towards Noise-adaptive, Problem-adaptive (Accelerated) Stochastic Gradient Descent
Sharan Vaswani, Benjamin Dubois-Taine, Reza Babanezhad
21 Oct 2021

FedChain: Chained Algorithms for Near-Optimal Communication Cost in Federated Learning
Charlie Hou, K. K. Thekumparampil, Giulia Fanti, Sewoong Oh
FedML
16 Aug 2021

Differentially Private Accelerated Optimization Algorithms
Nurdan Kuru, Ş. İlker Birbil, Mert Gurbuzbalaban, S. Yıldırım
05 Aug 2020

Robust Distributed Accelerated Stochastic Gradient Methods for Multi-Agent Networks
Alireza Fallah, Mert Gurbuzbalaban, Asuman Ozdaglar, Umut Simsekli, Lingjiong Zhu
19 Oct 2019

The Step Decay Schedule: A Near Optimal, Geometrically Decaying Learning Rate Procedure For Least Squares
Rong Ge, Sham Kakade, Rahul Kidambi, Praneeth Netrapalli
29 Apr 2019

Estimate Sequences for Stochastic Composite Optimization: Variance Reduction, Acceleration, and Robustness to Noise
A. Kulunchakov, Julien Mairal
25 Jan 2019

Accelerated Linear Convergence of Stochastic Momentum Methods in Wasserstein Distances
Bugra Can, Mert Gurbuzbalaban, Lingjiong Zhu
22 Jan 2019