Accelerated Linear Convergence of Stochastic Momentum Methods in Wasserstein Distances
arXiv:1901.07445 (v2, latest)
22 January 2019
Bugra Can, Mert Gurbuzbalaban, Lingjiong Zhu
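The title refers to stochastic momentum methods such as the stochastic heavy-ball iteration, which several of the citing papers below also study. As a point of reference, here is a minimal sketch of that iteration, assuming a generic stochastic-gradient oracle; the quadratic objective, noise model, and step-size/momentum values are illustrative choices, not parameters from the paper.

```python
# Minimal sketch of the stochastic heavy-ball (momentum) iteration.
# All concrete choices below (objective, noise, alpha, beta) are illustrative
# assumptions, not taken from the paper.
import numpy as np

def stochastic_heavy_ball(stoch_grad, x0, alpha=0.1, beta=0.9, n_iters=1000):
    """Iterate x_{k+1} = x_k - alpha * g_k + beta * (x_k - x_{k-1}),
    where g_k is a stochastic gradient evaluated at x_k."""
    x_prev = np.array(x0, dtype=float)
    x = x_prev.copy()
    for _ in range(n_iters):
        g = stoch_grad(x)
        x_next = x - alpha * g + beta * (x - x_prev)
        x_prev, x = x, x_next
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = np.diag([1.0, 10.0])  # strongly convex quadratic f(x) = 0.5 * x^T A x
    noisy_grad = lambda x: A @ x + 0.01 * rng.standard_normal(2)  # exact gradient plus additive noise
    x_final = stochastic_heavy_ball(noisy_grad, x0=np.ones(2))
    # With persistent gradient noise the iterates settle in a small neighborhood
    # of the minimizer (the origin) rather than converging to it exactly.
    print(x_final)
```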
Papers citing "Accelerated Linear Convergence of Stochastic Momentum Methods in Wasserstein Distances" (19 papers shown)

- A Universally Optimal Multistage Accelerated Stochastic Gradient Method. N. Aybat, Alireza Fallah, Mert Gurbuzbalaban, Asuman Ozdaglar. 23 Jan 2019.
- A Tail-Index Analysis of Stochastic Gradient Noise in Deep Neural Networks. Umut Simsekli, Levent Sagun, Mert Gurbuzbalaban. 18 Jan 2019.
- Breaking Reversibility Accelerates Langevin Dynamics for Global Non-Convex Optimization. Xuefeng Gao, Mert Gurbuzbalaban, Lingjiong Zhu. 19 Dec 2018.
- Accelerated Gossip via Stochastic Heavy Ball Method. Nicolas Loizou, Peter Richtárik. 23 Sep 2018.
- Robust Accelerated Gradient Methods for Smooth Strongly Convex Functions. N. Aybat, Alireza Fallah, Mert Gurbuzbalaban, Asuman Ozdaglar. 27 May 2018.
- Momentum and Stochastic Momentum for Stochastic Gradient, Newton, Proximal Point and Subspace Descent Methods. Nicolas Loizou, Peter Richtárik. 27 Dec 2017.
- Linearly convergent stochastic heavy ball method for minimizing generalization error. Nicolas Loizou, Peter Richtárik. 30 Oct 2017.
- Bridging the Gap between Constant Step Size Stochastic Gradient Descent and Markov Chains. Aymeric Dieuleveut, Alain Durmus, Francis R. Bach. 20 Jul 2017.
- Accelerating Stochastic Gradient Descent For Least Squares Regression. Prateek Jain, Sham Kakade, Rahul Kidambi, Praneeth Netrapalli, Aaron Sidford. 26 Apr 2017.
- Non-convex learning via Stochastic Gradient Langevin Dynamics: a nonasymptotic analysis. Maxim Raginsky, Alexander Rakhlin, Matus Telgarsky. 13 Feb 2017.
- Stochastic Heavy Ball. S. Gadat, Fabien Panloup, Sofiane Saadane. 14 Sep 2016.
- Local Minimax Complexity of Stochastic Convex Optimization. Yuancheng Zhu, S. Chatterjee, John C. Duchi, John D. Lafferty. 24 May 2016.
- Unified Convergence Analysis of Stochastic Momentum Methods for Convex and Non-convex Optimization. Tianbao Yang, Qihang Lin, Zhe Li. 12 Apr 2016.
- Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression. Aymeric Dieuleveut, Nicolas Flammarion, Francis R. Bach. 17 Feb 2016.
- Adding Gradient Noise Improves Learning for Very Deep Networks. Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, James Martens. 21 Nov 2015.
- From Averaging to Acceleration, There is Only a Step-size. Nicolas Flammarion, Francis R. Bach. 07 Apr 2015.
- Escaping From Saddle Points --- Online Stochastic Gradient for Tensor Decomposition. Rong Ge, Furong Huang, Chi Jin, Yang Yuan. 06 Mar 2015.
- A Differential Equation for Modeling Nesterov's Accelerated Gradient Method: Theory and Insights. Weijie Su, Stephen P. Boyd, Emmanuel J. Candes. 04 Mar 2015.
- Convex Optimization: Algorithms and Complexity. Sébastien Bubeck. 20 May 2014.