Robust Accelerated Gradient Methods for Smooth Strongly Convex Functions

27 May 2018
N. Aybat, Alireza Fallah, Mert Gurbuzbalaban, Asuman Ozdaglar
arXiv: 1805.10579 (abs / PDF / HTML)

Papers citing "Robust Accelerated Gradient Methods for Smooth Strongly Convex Functions"

12 papers shown

An Accelerated Algorithm for Stochastic Bilevel Optimization under Unbounded Smoothness
Xiaochuan Gong, Jie Hao, Mingrui Liu
28 Sep 2024

A Universally Optimal Multistage Accelerated Stochastic Gradient Method
N. Aybat, Alireza Fallah, Mert Gurbuzbalaban, Asuman Ozdaglar
23 Jan 2019

Accelerated Linear Convergence of Stochastic Momentum Methods in Wasserstein Distances
Bugra Can, Mert Gurbuzbalaban, Lingjiong Zhu
22 Jan 2019

Breaking Reversibility Accelerates Langevin Dynamics for Global Non-Convex Optimization
Xuefeng Gao, Mert Gurbuzbalaban, Lingjiong Zhu
19 Dec 2018

Global Convergence of Stochastic Gradient Hamiltonian Monte Carlo for Non-Convex Stochastic Optimization: Non-Asymptotic Performance Bounds and Momentum-Based Acceleration
Xuefeng Gao, Mert Gurbuzbalaban, Lingjiong Zhu
12 Sep 2018

An Explicit Convergence Rate for Nesterov's Method from SDP
S. Safavi, Bikash Joshi, G. França, José Bento
13 Jan 2018

Bridging the Gap between Constant Step Size Stochastic Gradient Descent and Markov Chains
Aymeric Dieuleveut, Alain Durmus, Francis R. Bach
20 Jul 2017

Underdamped Langevin MCMC: A non-asymptotic analysis
Xiang Cheng, Niladri S. Chatterji, Peter L. Bartlett, Michael I. Jordan
12 Jul 2017

Non-convex learning via Stochastic Gradient Langevin Dynamics: a nonasymptotic analysis
Maxim Raginsky, Alexander Rakhlin, Matus Telgarsky
13 Feb 2017

From Averaging to Acceleration, There is Only a Step-size
Nicolas Flammarion, Francis R. Bach
07 Apr 2015

Non-strongly-convex smooth stochastic approximation with convergence rate O(1/n)
Francis R. Bach, Eric Moulines
10 Jun 2013

Convergence Rates of Inexact Proximal-Gradient Methods for Convex Optimization
Mark Schmidt, Nicolas Le Roux, Francis R. Bach
12 Sep 2011