Cited By: arXiv:1504.01577
From Averaging to Acceleration, There is Only a Step-size
Nicolas Flammarion, Francis R. Bach. 7 April 2015.
Papers citing "From Averaging to Acceleration, There is Only a Step-size" (29 papers shown):
1. Leveraging Continuous Time to Understand Momentum When Training Diagonal Linear Networks
   Hristo Papazov, Scott Pesme, Nicolas Flammarion. 08 Mar 2024.

2. Provable Acceleration of Heavy Ball beyond Quadratics for a Class of Polyak-Łojasiewicz Functions when the Non-Convexity is Averaged-Out
   Jun-Kun Wang, Chi-Heng Lin, Andre Wibisono, Bin Hu. 22 Jun 2022.

3. On the fast convergence of minibatch heavy ball momentum
   Raghu Bollapragada, Tyler Chen, Rachel A. Ward. 15 Jun 2022.

4. Tight Convergence Rate Bounds for Optimization Under Power Law Spectral Conditions
   Maksim Velikanov, Dmitry Yarotsky. 02 Feb 2022.

5. No-Regret Dynamics in the Fenchel Game: A Unified Framework for Algorithmic Convex Optimization
   Jun-Kun Wang, Jacob D. Abernethy, Kfir Y. Levy. 22 Nov 2021.

6. Stable Anderson Acceleration for Deep Learning
   Massimiliano Lupo Pasini, Junqi Yin, Viktor Reshniak, M. Stoyanov. 26 Oct 2021.

7. Revisiting the Role of Euler Numerical Integration on Acceleration and Stability in Convex Optimization
   Peiyuan Zhang, Antonio Orvieto, Hadi Daneshmand, Thomas Hofmann, Roy S. Smith. 23 Feb 2021.

8. Noise and Fluctuation of Finite Learning Rate Stochastic Gradient Descent
   Kangqiao Liu, Liu Ziyin, Masakuni Ueda. 07 Dec 2020.

9. Quickly Finding a Benign Region via Heavy Ball Momentum in Non-Convex Optimization
   Jun-Kun Wang, Jacob D. Abernethy. 04 Oct 2020.

10. Differentially Private Accelerated Optimization Algorithms
    Nurdan Kuru, Ş. İlker Birbil, Mert Gurbuzbalaban, S. Yıldırım. 05 Aug 2020.

11. Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization
    Samuel Horváth, Lihua Lei, Peter Richtárik, Michael I. Jordan. 13 Feb 2020.

12. On the Effectiveness of Richardson Extrapolation in Machine Learning
    Francis R. Bach. 07 Feb 2020.

13. Robust Distributed Accelerated Stochastic Gradient Methods for Multi-Agent Networks
    Alireza Fallah, Mert Gurbuzbalaban, Asuman Ozdaglar, Umut Simsekli, Lingjiong Zhu. 19 Oct 2019.

14. Conjugate Gradients and Accelerated Methods Unified: The Approximate Duality Gap View
    Jelena Diakonikolas, L. Orecchia. 29 Jun 2019.

15. Reducing the variance in online optimization by transporting past gradients
    Sébastien M. R. Arnold, Pierre-Antoine Manzagol, Reza Babanezhad, Ioannis Mitliagkas, Nicolas Le Roux. 08 Jun 2019.

16. On the Adaptivity of Stochastic Gradient-Based Optimization
    Lihua Lei, Michael I. Jordan. 09 Apr 2019.

17. A Universally Optimal Multistage Accelerated Stochastic Gradient Method
    N. Aybat, Alireza Fallah, Mert Gurbuzbalaban, Asuman Ozdaglar. 23 Jan 2019.

18. Accelerated Linear Convergence of Stochastic Momentum Methods in Wasserstein Distances
    Bugra Can, Mert Gurbuzbalaban, Lingjiong Zhu. 22 Jan 2019.

19. Understanding the Acceleration Phenomenon via High-Resolution Differential Equations
    Bin Shi, S. Du, Michael I. Jordan, Weijie J. Su. 21 Oct 2018.

20. Online Adaptive Methods, Universality and Acceleration
    Kfir Y. Levy, A. Yurtsever, V. Cevher. 08 Sep 2018.

21. A Tight Convergence Analysis for Stochastic Gradient Descent with Delayed Updates
    Yossi Arjevani, Ohad Shamir, Nathan Srebro. 26 Jun 2018.

22. Towards Riemannian Accelerated Gradient Methods
    Hongyi Zhang, S. Sra. 07 Jun 2018.

23. Stochastic Composite Least-Squares Regression with convergence rate O(1/n)
    Nicolas Flammarion, Francis R. Bach. 21 Feb 2017.

24. Parallelizing Stochastic Gradient Descent for Least Squares Regression: mini-batching, averaging, and model misspecification
    Prateek Jain, Sham Kakade, Rahul Kidambi, Praneeth Netrapalli, Aaron Sidford. 12 Oct 2016.

25. Stochastic Heavy Ball
    S. Gadat, Fabien Panloup, Sofiane Saadane. 14 Sep 2016.

26. On the Iteration Complexity of Oblivious First-Order Optimization Algorithms
    Yossi Arjevani, Ohad Shamir. 11 May 2016.

27. A Variational Perspective on Accelerated Methods in Optimization
    Andre Wibisono, Ashia Wilson, Michael I. Jordan. 14 Mar 2016.

28. On the Influence of Momentum Acceleration on Online Learning
    Kun Yuan, Bicheng Ying, Ali H. Sayed. 14 Mar 2016.

29. A Differential Equation for Modeling Nesterov's Accelerated Gradient Method: Theory and Insights
    Weijie Su, Stephen P. Boyd, Emmanuel J. Candes. 04 Mar 2015.