SGD and Hogwild! Convergence Without the Bounded Gradients Assumption
arXiv:1802.03801 · 11 February 2018
Lam M. Nguyen, Phuong Ha Nguyen, Marten van Dijk, Peter Richtárik, K. Scheinberg, Martin Takáč
Papers citing "SGD and Hogwild! Convergence Without the Bounded Gradients Assumption" (50 of 137 papers shown)
γ-FedHT: Stepsize-Aware Hard-Threshold Gradient Compression in Federated Learning
Rongwei Lu, Yutong Jiang, Jinrui Zhang, Chunyang Li, Yifei Zhu, Bin Chen, Zhi Wang · FedML · 18 May 2025

Stochastic Gradient Descent in Non-Convex Problems: Asymptotic Convergence with Relaxed Step-Size via Stopping Time Methods
Ruinan Jin, Difei Cheng, Hong Qiao, Xin Shi, Shaodong Liu, Bo Zhang · 17 Apr 2025

Biased Federated Learning under Wireless Heterogeneity
Muhammad Faraz Ul Abrar, Nicolò Michelusi · FedML · 08 Mar 2025

Ringmaster ASGD: The First Asynchronous SGD with Optimal Time Complexity
A. Maranjyan, A. Tyurin, Peter Richtárik · 28 Jan 2025

PBM-VFL: Vertical Federated Learning with Feature and Sample Privacy
Linh Tran, Timothy Castiglia, Stacy Patterson, Ana Milanova · FedML · 23 Jan 2025
Revisiting LocalSGD and SCAFFOLD: Improved Rates and Missing Analysis
Ruichen Luo, Sebastian U Stich, Samuel Horváth, Martin Takáč · 08 Jan 2025

Beyond adaptive gradient: Fast-Controlled Minibatch Algorithm for large-scale optimization
Corrado Coppola, Lorenzo Papa, Irene Amerini, L. Palagi · ODL · 24 Nov 2024

Equitable Federated Learning with Activation Clustering
Antesh Upadhyay, Abolfazl Hashemi · FedML · 24 Oct 2024

An Attention-Based Algorithm for Gravity Adaptation Zone Calibration
Chen Yu · 06 Oct 2024

MindFlayer: Efficient Asynchronous Parallel SGD in the Presence of Heterogeneous and Random Worker Compute Times
A. Maranjyan, Omar Shaikh Omar, Peter Richtárik · 05 Oct 2024

On the SAGA algorithm with decreasing step
Luis Fredes, Bernard Bercu, Eméric Gbaguidi · 02 Oct 2024
FADAS: Towards Federated Adaptive Asynchronous Optimization
Yujia Wang, Shiqiang Wang, Songtao Lu, Jinghui Chen · FedML · 25 Jul 2024

Stochastic Polyak Step-sizes and Momentum: Convergence Guarantees and Practical Performance
Dimitris Oikonomou, Nicolas Loizou · 06 Jun 2024

Demystifying SGD with Doubly Stochastic Gradients
Kyurae Kim, Joohwan Ko, Yian Ma, Jacob R. Gardner · 03 Jun 2024

Local Methods with Adaptivity via Scaling
Saveliy Chezhegov, Sergey Skorik, Nikolas Khachaturov, Danil Shalagin, A. Avetisyan, Aleksandr Beznosikov, Martin Takáč, Yaroslav Kholodov, Alexander Gasnikov · 02 Jun 2024

Federated Learning with Bilateral Curation for Partially Class-Disjoint Data
Ziqing Fan, Ruipeng Zhang, Jiangchao Yao, Bo Han, Ya Zhang, Yanfeng Wang · FedML · 29 May 2024
Freya PAGE: First Optimal Time Complexity for Large-Scale Nonconvex Finite-Sum Optimization with Heterogeneous Asynchronous Computations
A. Tyurin, Kaja Gruntkowska, Peter Richtárik · 24 May 2024

Stochastic Constrained Decentralized Optimization for Machine Learning with Fewer Data Oracles: a Gradient Sliding Approach
Hoang Huy Nguyen, Yan Li, Tuo Zhao · 03 Apr 2024

On the Last-Iterate Convergence of Shuffling Gradient Methods
Zijian Liu, Zhengyuan Zhou · 12 Mar 2024

Shuffling Momentum Gradient Algorithm for Convex Optimization
Trang H. Tran, Quoc Tran-Dinh, Lam M. Nguyen · 05 Mar 2024

Remove that Square Root: A New Efficient Scale-Invariant Version of AdaGrad
Sayantan Choudhury, N. Tupitsa, Nicolas Loizou, Samuel Horváth, Martin Takáč, Eduard A. Gorbunov · 05 Mar 2024

On the Convergence of Federated Learning Algorithms without Data Similarity
Ali Beikmohammadi, Sarit Khirirat, Sindri Magnússon · FedML · 29 Feb 2024
Provably Scalable Black-Box Variational Inference with Structured Variational Families
Joohwan Ko, Kyurae Kim, W. Kim, Jacob R. Gardner · BDL · 19 Jan 2024

A New Random Reshuffling Method for Nonsmooth Nonconvex Finite-sum Optimization
Junwen Qiu, Xiao Li, Andre Milzarek · 02 Dec 2023

On Adaptive Stochastic Optimization for Streaming Data: A Newton's Method with O(dN) Operations
Antoine Godichon-Baggioni, Nicklas Werge · ODL · 29 Nov 2023

Adaptive Step Sizes for Preconditioned Stochastic Gradient Descent
Frederik Köhne, Leonie Kreis, Anton Schiela, Roland A. Herzog · 28 Nov 2023

Accelerating Large Batch Training via Gradient Signal to Noise Ratio (GSNR)
Guo-qing Jiang, Jinlong Liu, Zixiang Ding, Lin Guo, W. Lin · AI4CE · 24 Sep 2023

Improved Convergence Analysis and SNR Control Strategies for Federated Learning in the Presence of Noise
Antesh Upadhyay, Abolfazl Hashemi · 14 Jul 2023
Towards a Better Theoretical Understanding of Independent Subnetwork Training
Egor Shulgin, Peter Richtárik · AI4CE · 28 Jun 2023

No Wrong Turns: The Simple Geometry Of Neural Networks Optimization Paths
Charles Guille-Escuret, Hiroki Naganuma, Kilian Fatras, Ioannis Mitliagkas · 20 Jun 2023

Acceleration of stochastic gradient descent with momentum by averaging: finite-sample rates and asymptotic normality
Kejie Tang, Weidong Liu, Yichen Zhang, Xi Chen · 28 May 2023

On the Convergence of Black-Box Variational Inference
Kyurae Kim, Jisu Oh, Kaiwen Wu, Yi Ma, Jacob R. Gardner · BDL · 24 May 2023

Breaking the Curse of Quality Saturation with User-Centric Ranking
Zhuokai Zhao, Yang Yang, Wenyu Wang, Chi-Yu Liu, Yunluo Shi, Wenjie Hu, Haotian Zhang, Shuangjun Yang · 24 May 2023
Layer-wise Adaptive Step-Sizes for Stochastic First-Order Methods for Deep Learning
Achraf Bahamou, D. Goldfarb · ODL · 23 May 2023

Practical and Matching Gradient Variance Bounds for Black-Box Variational Bayesian Inference
Kyurae Kim, Kaiwen Wu, Jisu Oh, Jacob R. Gardner · BDL · 18 Mar 2023

Considerations on the Theory of Training Models with Differential Privacy
Marten van Dijk, Phuong Ha Nguyen · FedML · 08 Mar 2023

Combating Exacerbated Heterogeneity for Robust Models in Federated Learning
Jianing Zhu, Jiangchao Yao, Tongliang Liu, Quanming Yao, Jianliang Xu, Bo Han · FedML · 01 Mar 2023

Maximum Likelihood With a Time Varying Parameter
Alberto Lanconelli, Christopher S. A. Lauria · 28 Feb 2023

Generalizing DP-SGD with Shuffling and Batch Clipping
Marten van Dijk, Phuong Ha Nguyen, Toan N. Nguyen, Lam M. Nguyen · 12 Dec 2022
A note on diffusion limits for stochastic gradient descent
Alberto Lanconelli, Christopher S. A. Lauria · DiffM · 20 Oct 2022

Quantization for decentralized learning under subspace constraints
Roula Nassif, Stefan Vlaski, Marco Carpentiero, Vincenzo Matta, Marc Antonini, Ali H. Sayed · 16 Sep 2022

Convergence of Batch Updating Methods with Approximate Gradients and/or Noisy Measurements: Theory and Computational Results
Tadipatri Uday, M. Vidyasagar · 12 Sep 2022

Flexible Vertical Federated Learning with Heterogeneous Parties
Timothy Castiglia, Shiqiang Wang, S. Patterson · FedML · 26 Aug 2022

Adaptive Learning Rates for Faster Stochastic Gradient Methods
Samuel Horváth, Konstantin Mishchenko, Peter Richtárik · ODL · 10 Aug 2022

Green, Quantized Federated Learning over Wireless Networks: An Energy-Efficient Design
Minsu Kim, Walid Saad, Mohammad Mozaffari, Merouane Debbah · FedML, MQ · 19 Jul 2022
A General Theory for Federated Optimization with Asynchronous and Heterogeneous Clients Updates
Yann Fraboni, Richard Vidal, Laetitia Kameni, Marco Lorenzi · FedML · 21 Jun 2022

Finding Optimal Policy for Queueing Models: New Parameterization
Trang H. Tran, Lam M. Nguyen, K. Scheinberg · OffRL · 21 Jun 2022

Compressed-VFL: Communication-Efficient Learning with Vertically Partitioned Data
Timothy Castiglia, Anirban Das, Shiqiang Wang, S. Patterson · FedML · 16 Jun 2022

Sharper Convergence Guarantees for Asynchronous SGD for Distributed and Federated Learning
Anastasia Koloskova, Sebastian U. Stich, Martin Jaggi · FedML · 16 Jun 2022

Markov Chain Score Ascent: A Unifying Framework of Variational Inference with Markovian Gradients
Kyurae Kim, Jisu Oh, Jacob R. Gardner, Adji Bousso Dieng, Hongseok Kim · BDL · 13 Jun 2022