SGD and Hogwild! Convergence Without the Bounded Gradients Assumption
arXiv:1802.03801 · 11 February 2018
Lam M. Nguyen, Phuong Ha Nguyen, Marten van Dijk, Peter Richtárik, K. Scheinberg, Martin Takáč

Papers citing "SGD and Hogwild! Convergence Without the Bounded Gradients Assumption" (37 of 137 papers shown)

Better scalability under potentially heavy-tailed gradients
Matthew J. Holland · 01 Jun 2020

Stochastic Optimization with Heavy-Tailed Noise via Accelerated Gradient Clipping
Eduard A. Gorbunov, Marina Danilova, Alexander Gasnikov · 21 May 2020

On the Convergence Analysis of Asynchronous SGD for Solving Consistent Linear Systems
Atal Narayan Sahu, Aritra Dutta, Aashutosh Tiwari, Peter Richtárik · 05 Apr 2020

Stochastic Proximal Gradient Algorithm with Minibatches. Application to Large Scale Learning Models
A. Pătraşcu, C. Paduraru, Paul Irofti · 30 Mar 2020

Finite-Time Analysis of Stochastic Gradient Descent under Markov Randomness
Thinh T. Doan, Lam M. Nguyen, Nhan H. Pham, Justin Romberg · 24 Mar 2020

Stochastic Polyak Step-size for SGD: An Adaptive Learning Rate for Fast Convergence
Nicolas Loizou, Sharan Vaswani, I. Laradji, Simon Lacoste-Julien · 24 Feb 2020

A Unified Convergence Analysis for Shuffling-Type Gradient Methods
Lam M. Nguyen, Quoc Tran-Dinh, Dzung Phan, Phuong Ha Nguyen, Marten van Dijk · 19 Feb 2020

Elastic Consistency: A General Consistency Model for Distributed Stochastic Gradient Descent
Giorgi Nadiradze, Ilia Markov, Bapi Chatterjee, Vyacheslav Kungurtsev, Dan Alistarh · 16 Jan 2020 [FedML]

Stochastic proximal splitting algorithm for composite minimization
A. Pătraşcu, Paul Irofti · 04 Dec 2019

Stochastic Newton and Cubic Newton Methods with Simple Local Linear-Quadratic Rates
D. Kovalev, Konstantin Mishchenko, Peter Richtárik · 03 Dec 2019 [ODL]

MindTheStep-AsyncPSGD: Adaptive Asynchronous Parallel Stochastic Gradient Descent
Karl Bäckström, Marina Papatriantafilou, P. Tsigas · 08 Nov 2019

Error Lower Bounds of Constant Step-size Stochastic Gradient Descent
Zhiyan Ding, Yiding Chen, Qin Li, Xiaojin Zhu · 18 Oct 2019

A Double Residual Compression Algorithm for Efficient Distributed Learning
Xiaorui Liu, Yao Li, Jiliang Tang, Ming Yan · 16 Oct 2019

Randomized Iterative Methods for Linear Systems: Momentum, Inexactness and Gossip
Nicolas Loizou · 26 Sep 2019

Mix and Match: An Optimistic Tree-Search Approach for Learning Models from Mixture Distributions
Matthew Faw, Rajat Sen, Karthikeyan Shanmugam, C. Caramanis, Sanjay Shakkottai · 23 Jul 2019

Unified Optimal Analysis of the (Stochastic) Gradient Method
Sebastian U. Stich · 09 Jul 2019

Qsparse-local-SGD: Distributed SGD with Quantization, Sparsification, and Local Computations
Debraj Basu, Deepesh Data, C. Karakuş, Suhas Diggavi · 06 Jun 2019 [MQ]

A Generic Acceleration Framework for Stochastic Composite Optimization
A. Kulunchakov, Julien Mairal · 03 Jun 2019

A Unified Theory of SGD: Variance Reduction, Sampling, Quantization and Coordinate Descent
Eduard A. Gorbunov, Filip Hanzely, Peter Richtárik · 27 May 2019

Beyond Alternating Updates for Matrix Factorization with Inertial Bregman Proximal Gradient Algorithms
Mahesh Chandra Mukkamala, Peter Ochs · 22 May 2019

Estimate Sequences for Variance-Reduced Stochastic Composite Optimization
A. Kulunchakov, Julien Mairal · 07 May 2019

SGD without Replacement: Sharper Rates for General Smooth Convex Functions
Prateek Jain, Dheeraj M. Nagaraj, Praneeth Netrapalli · 04 Mar 2019

Stochastic Gradient Descent for Nonconvex Learning without Bounded Gradient Assumptions
Yunwen Lei, Ting Hu, Guiying Li, K. Tang · 03 Feb 2019 [MLT]

SGD: General Analysis and Improved Rates
Robert Mansel Gower, Nicolas Loizou, Xun Qian, Alibek Sailanbayev, Egor Shulgin, Peter Richtárik · 27 Jan 2019

Trajectory Normalized Gradients for Distributed Optimization
Jianqiao Wangni, Ke Li, Jianbo Shi, Jitendra Malik · 24 Jan 2019

Finite-Sum Smooth Optimization with SARAH
Lam M. Nguyen, Marten van Dijk, Dzung Phan, Phuong Ha Nguyen, Tsui-Wei Weng, Jayant Kalagnanam · 22 Jan 2019

DTN: A Learning Rate Scheme with Convergence Rate of $\mathcal{O}(1/t)$ for SGD
Lam M. Nguyen, Phuong Ha Nguyen, Dzung Phan, Jayant Kalagnanam, Marten van Dijk · 22 Jan 2019

New nonasymptotic convergence rates of stochastic proximal point algorithm for convex optimization problems
A. Pătraşcu · 22 Jan 2019

Inexact SARAH Algorithm for Stochastic Optimization
Lam M. Nguyen, K. Scheinberg, Martin Takáč · 25 Nov 2018

New Convergence Aspects of Stochastic Gradient Algorithms
Lam M. Nguyen, Phuong Ha Nguyen, Peter Richtárik, K. Scheinberg, Martin Takáč, Marten van Dijk · 10 Nov 2018

Tight Dimension Independent Lower Bound on the Expected Convergence Rate for Diminishing Step Sizes in SGD
Phuong Ha Nguyen, Lam M. Nguyen, Marten van Dijk · 10 Oct 2018 [LRM]

Characterization of Convex Objective Functions and Optimal Expected Convergence Rates for SGD
Marten van Dijk, Lam M. Nguyen, Phuong Ha Nguyen, Dzung Phan · 09 Oct 2018

On the Acceleration of L-BFGS with Second-Order Information and Stochastic Batches
Jie Liu, Yu Rong, Martin Takáč, Junzhou Huang · 14 Jul 2018 [ODL]

Random Shuffling Beats SGD after Finite Epochs
Jeff Z. HaoChen, S. Sra · 26 Jun 2018

Distributed learning with compressed gradients
Sarit Khirirat, Hamid Reza Feyzmahdavian, M. Johansson · 18 Jun 2018

The Convergence of Stochastic Gradient Descent in Asynchronous Shared Memory
Dan Alistarh, Christopher De Sa, Nikola Konstantinov · 23 Mar 2018

The duality structure gradient descent algorithm: analysis and applications to neural networks
Thomas Flynn · 01 Aug 2017