arXiv: 1901.08689
Don't Jump Through Hoops and Remove Those Loops: SVRG and Katyusha are Better Without the Outer Loop
24 January 2019
D. Kovalev, Samuel Horváth, Peter Richtárik
Papers citing "Don't Jump Through Hoops and Remove Those Loops: SVRG and Katyusha are Better Without the Outer Loop" (32 papers shown)
- Variance Reduction Methods Do Not Need to Compute Full Gradients: Improved Efficiency through Shuffling. Daniil Medyakov, Gleb Molodtsov, S. Chezhegov, Alexey Rebrikov, Aleksandr Beznosikov (21 Feb 2025)
- SOREL: A Stochastic Algorithm for Spectral Risks Minimization. Yuze Ge, Rujun Jiang (19 Jul 2024)
- Decentralized Sum-of-Nonconvex Optimization. Zhuanghua Liu, K. H. Low (04 Feb 2024)
- Correlated Quantization for Faster Nonconvex Distributed Optimization. Andrei Panferov, Yury Demidovich, Ahmad Rammal, Peter Richtárik (10 Jan 2024) [MQ]
- Sarah Frank-Wolfe: Methods for Constrained Optimization with Best Rates and Practical Features. Aleksandr Beznosikov, David Dobre, Gauthier Gidel (23 Apr 2023)
- Stochastic Distributed Optimization under Average Second-order Similarity: Algorithms and Analysis. Dachao Lin, Yuze Han, Haishan Ye, Zhihua Zhang (15 Apr 2023)
- Gradient Descent-Type Methods: Background and Simple Unified Convergence Analysis. Quoc Tran-Dinh, Marten van Dijk (19 Dec 2022)
- BALPA: A Balanced Primal-Dual Algorithm for Nonsmooth Optimization with Application to Distributed Optimization. Luyao Guo, Jinde Cao, Xinli Shi, Shaofu Yang (06 Dec 2022)
- Federated Averaging Langevin Dynamics: Toward a unified theory and new algorithms. Vincent Plassier, Alain Durmus, Eric Moulines (31 Oct 2022) [FedML]
- Smooth Monotone Stochastic Variational Inequalities and Saddle Point Problems: A Survey. Aleksandr Beznosikov, Boris Polyak, Eduard A. Gorbunov, D. Kovalev, Alexander Gasnikov (29 Aug 2022)
- Federated Random Reshuffling with Compression and Variance Reduction. Grigory Malinovsky, Peter Richtárik (08 May 2022) [FedML]
- An Adaptive Incremental Gradient Method With Support for Non-Euclidean Norms. Binghui Xie, Chen Jin, Kaiwen Zhou, James Cheng, Wei Meng (28 Apr 2022)
- Stochastic Gradient Descent-Ascent: Unified Theory and New Efficient Methods. Aleksandr Beznosikov, Eduard A. Gorbunov, Hugo Berard, Nicolas Loizou (15 Feb 2022)
- Decentralized Stochastic Variance Reduced Extragradient Method. Luo Luo, Haishan Ye (01 Feb 2022)
- PAGE-PG: A Simple and Loopless Variance-Reduced Policy Gradient Method with Probabilistic Gradient Estimation. Matilde Gargiani, Andrea Zanelli, Andrea Martinelli, Tyler H. Summers, John Lygeros (01 Feb 2022)
- L-SVRG and L-Katyusha with Adaptive Sampling. Boxin Zhao, Boxiang Lyu, Mladen Kolar (31 Jan 2022)
- Decentralized Composite Optimization with Compression. Yao Li, Xiaorui Liu, Jiliang Tang, Ming Yan, Kun Yuan (10 Aug 2021)
- ANITA: An Optimal Loopless Accelerated Variance-Reduced Gradient Method. Zhize Li (21 Mar 2021)
- SVRG Meets AdaGrad: Painless Variance Reduction. Benjamin Dubois-Taine, Sharan Vaswani, Reza Babanezhad, Mark W. Schmidt, Simon Lacoste-Julien (18 Feb 2021)
- IntSGD: Adaptive Floatless Compression of Stochastic Gradients. Konstantin Mishchenko, Bokun Wang, D. Kovalev, Peter Richtárik (16 Feb 2021)
- PMGT-VR: A decentralized proximal-gradient algorithmic framework with variance reduction. Haishan Ye, Wei Xiong, Tong Zhang (30 Dec 2020)
- Local SGD: Unified Theory and New Efficient Methods. Eduard A. Gorbunov, Filip Hanzely, Peter Richtárik (03 Nov 2020) [FedML]
- Optimization for Supervised Machine Learning: Randomized Algorithms for Data and Parameters. Filip Hanzely (26 Aug 2020)
- PAGE: A Simple and Optimal Probabilistic Gradient Estimator for Nonconvex Optimization. Zhize Li, Hongyan Bao, Xiangliang Zhang, Peter Richtárik (25 Aug 2020) [ODL]
- Stochastic Hamiltonian Gradient Methods for Smooth Games. Nicolas Loizou, Hugo Berard, Alexia Jolicoeur-Martineau, Pascal Vincent, Simon Lacoste-Julien, Ioannis Mitliagkas (08 Jul 2020)
- Acceleration for Compressed Gradient Descent in Distributed and Federated Optimization. Zhize Li, D. Kovalev, Xun Qian, Peter Richtárik (26 Feb 2020) [FedML, AI4CE]
- Sampling and Update Frequencies in Proximal Variance-Reduced Stochastic Gradient Methods. Martin Morin, Pontus Giselsson (13 Feb 2020)
- Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization. Samuel Horváth, Lihua Lei, Peter Richtárik, Michael I. Jordan (13 Feb 2020)
- Variance Reduced Coordinate Descent with Acceleration: New Method With a Surprising Application to Finite-Sum Problems. Filip Hanzely, D. Kovalev, Peter Richtárik (11 Feb 2020)
- Cocoercivity, Smoothness and Bias in Variance-Reduced Stochastic Gradient Methods. Martin Morin, Pontus Giselsson (21 Mar 2019)
- Momentum and Stochastic Momentum for Stochastic Gradient, Newton, Proximal Point and Subspace Descent Methods. Nicolas Loizou, Peter Richtárik (27 Dec 2017)
- Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning. Julien Mairal (18 Feb 2014)