ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

SGD and Hogwild! Convergence Without the Bounded Gradients Assumption
11 February 2018
Lam M. Nguyen
Phuong Ha Nguyen
Marten van Dijk
Peter Richtárik
K. Scheinberg
Martin Takáč

Papers citing "SGD and Hogwild! Convergence Without the Bounded Gradients Assumption"

50 / 137 papers shown

On the Convergence to a Global Solution of Shuffling-Type Gradient Algorithms
Lam M. Nguyen, Trang H. Tran
13 Jun 2022

Variance Reduction is an Antidote to Byzantines: Better Rates, Weaker Assumptions and Communication Compression as a Cherry on the Top
Eduard A. Gorbunov, Samuel Horváth, Peter Richtárik, Gauthier Gidel
AAML
01 Jun 2022

FedAvg with Fine Tuning: Local Updates Lead to Representation Learning
Liam Collins, Hamed Hassani, Aryan Mokhtari, Sanjay Shakkottai
FedML
27 May 2022

Learning from time-dependent streaming data with online stochastic algorithms
Antoine Godichon-Baggioni, Nicklas Werge, Olivier Wintenberger
25 May 2022

Local Stochastic Factored Gradient Descent for Distributed Quantum State Tomography
J. Kim, Taha Toghani, César A. Uribe, Anastasios Kyrillidis
22 Mar 2022

On Almost Sure Convergence Rates of Stochastic Gradient Methods
Jun Liu, Ye Yuan
09 Feb 2022

On Unbalanced Optimal Transport: Gradient Methods, Sparsity and Approximation Error
Quang Minh Nguyen, Hoang H. Nguyen, Yi Zhou, Lam M. Nguyen
OT
08 Feb 2022

Nesterov Accelerated Shuffling Gradient Method for Convex Optimization
Trang H. Tran, K. Scheinberg, Lam M. Nguyen
07 Feb 2022

Finite-Sum Optimization: A New Perspective for Convergence to a Global Solution
Lam M. Nguyen, Trang H. Tran, Marten van Dijk
07 Feb 2022

On the Convergence of mSGD and AdaGrad for Stochastic Optimization
Ruinan Jin, Yu Xing, Xingkang He
26 Jan 2022

AET-SGD: Asynchronous Event-triggered Stochastic Gradient Descent
Nhuong V. Nguyen, Song Han
27 Dec 2021

Decentralized Multi-Task Stochastic Optimization With Compressed Communications
Navjot Singh, Xuanyu Cao, Suhas Diggavi, Tamer Basar
23 Dec 2021

On the Tradeoff between Energy, Precision, and Accuracy in Federated Quantized Neural Networks
Minsu Kim, Walid Saad, Mohammad Mozaffari, Merouane Debbah
FedML, MQ
15 Nov 2021

Persia: An Open, Hybrid System Scaling Deep Learning-based Recommenders up to 100 Trillion Parameters
Xiangru Lian, Binhang Yuan, Xuefeng Zhu, Yulong Wang, Yongjun He, ..., Lei Yuan, Hai-bo Yu, Sen Yang, Ce Zhang, Ji Liu
VLM
10 Nov 2021

Accelerated Almost-Sure Convergence Rates for Nonconvex Stochastic Gradient Descent using Stochastic Learning Rates
Theodoros Mamalis, D. Stipanović, R. Tao
25 Oct 2021

Accelerating Perturbed Stochastic Iterates in Asynchronous Lock-Free Optimization
Kaiwen Zhou, Anthony Man-Cho So, James Cheng
30 Sep 2021

Doubly Adaptive Scaled Algorithm for Machine Learning Using Second-Order Information
Majid Jahani, S. Rusakov, Zheng Shi, Peter Richtárik, Michael W. Mahoney, Martin Takáč
ODL
11 Sep 2021

Asynchronous Iterations in Optimization: New Sequence Results and Sharper Algorithmic Guarantees
Hamid Reza Feyzmahdavian, M. Johansson
09 Sep 2021

Optimizing the Numbers of Queries and Replies in Federated Learning with Differential Privacy
Yipeng Zhou, Xuezheng Liu, Yao Fu, Di Wu, Chao Li, Shui Yu
FedML
05 Jul 2021

BAGUA: Scaling up Distributed Learning with System Relaxations
Shaoduo Gan, Xiangru Lian, Rui Wang, Jianbin Chang, Chengjun Liu, ..., Jiawei Jiang, Binhang Yuan, Sen Yang, Ji Liu, Ce Zhang
03 Jul 2021

Stochastic Gradient Descent-Ascent and Consensus Optimization for Smooth Games: Convergence Analysis under Expected Co-coercivity
Nicolas Loizou, Hugo Berard, Gauthier Gidel, Ioannis Mitliagkas, Simon Lacoste-Julien
30 Jun 2021

Distributed Learning and its Application for Time-Series Prediction
Nhuong V. Nguyen, Sybille Legitime
AI4TS
06 Jun 2021

SGD with Coordinate Sampling: Theory and Practice
Rémi Leluc, François Portier
25 May 2021

Robust learning with anytime-guaranteed feedback
Matthew J. Holland
OOD
24 May 2021

A Bregman Learning Framework for Sparse Neural Networks
Leon Bungert, Tim Roith, Daniel Tenbrinck, Martin Burger
10 May 2021

Decentralized Federated Averaging
Tao Sun, Dongsheng Li, Bao Wang
FedML
23 Apr 2021

Distributed Learning Systems with First-order Methods
Ji Liu, Ce Zhang
12 Apr 2021

Stochastic Reweighted Gradient Descent
Ayoub El Hanchi, D. Stephens
23 Mar 2021

FedDR -- Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization
Quoc Tran-Dinh, Nhan H. Pham, Dzung Phan, Lam M. Nguyen
FedML
05 Mar 2021

Instrumental Variable Value Iteration for Causal Offline Reinforcement Learning
Luofeng Liao, Zuyue Fu, Zhuoran Yang, Yixin Wang, Mladen Kolar, Zhaoran Wang
OffRL
19 Feb 2021

AI-SARAH: Adaptive and Implicit Stochastic Recursive Gradient Methods
Zheng Shi, Abdurakhmon Sadiev, Nicolas Loizou, Peter Richtárik, Martin Takáč
ODL
19 Feb 2021

Proactive DP: A Multiple Target Optimization Framework for DP-SGD
Marten van Dijk, Nhuong V. Nguyen, Toan N. Nguyen, Lam M. Nguyen, Phuong Ha Nguyen
17 Feb 2021

Communication-Efficient Distributed Optimization with Quantized Preconditioners
Foivos Alimisis, Peter Davies, Dan Alistarh
14 Feb 2021

Training Federated GANs with Theoretical Guarantees: A Universal Aggregation Approach
Yikai Zhang, Hui Qu, Qi Chang, Huidong Liu, Dimitris N. Metaxas, Chao Chen
FedML
09 Feb 2021

On the Practicality of Differential Privacy in Federated Learning by Tuning Iteration Times
Yao Fu, Yipeng Zhou, Di Wu, Shui Yu, Yonggang Wen, Chao Li
FedML
11 Jan 2021

Better scalability under potentially heavy-tailed feedback
Matthew J. Holland
14 Dec 2020

SMG: A Shuffling Gradient-Based Method with Momentum
Trang H. Tran, Lam M. Nguyen, Quoc Tran-Dinh
24 Nov 2020

Local SGD: Unified Theory and New Efficient Methods
Eduard A. Gorbunov, Filip Hanzely, Peter Richtárik
FedML
03 Nov 2020

Hogwild! over Distributed Local Data Sets with Linearly Increasing Mini-Batch Sizes
Marten van Dijk, Nhuong V. Nguyen, Toan N. Nguyen, Lam M. Nguyen, Quoc Tran-Dinh, Phuong Ha Nguyen
FedML
27 Oct 2020

Linearly Converging Error Compensated SGD
Eduard A. Gorbunov, D. Kovalev, Dmitry Makarenko, Peter Richtárik
23 Oct 2020

Optimization for Supervised Machine Learning: Randomized Algorithms for Data and Parameters
Filip Hanzely
26 Aug 2020

Asynchronous Federated Learning with Reduced Number of Rounds and with Differential Privacy from Less Aggregated Gaussian Noise
Marten van Dijk, Nhuong V. Nguyen, Toan N. Nguyen, Lam M. Nguyen, Quoc Tran-Dinh, Phuong Ha Nguyen
FedML
17 Jul 2020

Stochastic Hamiltonian Gradient Methods for Smooth Games
Nicolas Loizou, Hugo Berard, Alexia Jolicoeur-Martineau, Pascal Vincent, Simon Lacoste-Julien, Ioannis Mitliagkas
08 Jul 2020

Linear Convergent Decentralized Optimization with Compression
Xiaorui Liu, Yao Li, Rongrong Wang, Jiliang Tang, Ming Yan
01 Jul 2020

Advances in Asynchronous Parallel and Distributed Optimization
Mahmoud Assran, Arda Aytekin, Hamid Reza Feyzmahdavian, M. Johansson, Michael G. Rabbat
24 Jun 2020

Unified Analysis of Stochastic Gradient Methods for Composite Convex and Smooth Optimization
Ahmed Khaled, Othmane Sebbouh, Nicolas Loizou, Robert Mansel Gower, Peter Richtárik
20 Jun 2020

SGD for Structured Nonconvex Functions: Learning Rates, Minibatching and Interpolation
Robert Mansel Gower, Othmane Sebbouh, Nicolas Loizou
18 Jun 2020

Almost sure convergence rates for Stochastic Gradient Descent and Stochastic Heavy Ball
Othmane Sebbouh, Robert Mansel Gower, Aaron Defazio
14 Jun 2020

Random Reshuffling: Simple Analysis with Vast Improvements
Konstantin Mishchenko, Ahmed Khaled, Peter Richtárik
10 Jun 2020

Asymptotic Analysis of Conditioned Stochastic Gradient Descent
Rémi Leluc, François Portier
04 Jun 2020