ResearchTrend.AI
© 2025 ResearchTrend.AI. All rights reserved.

arXiv:2105.04851
Improving the Transient Times for Distributed Stochastic Gradient Methods
Kun-Yen Huang, Shi Pu
11 May 2021

Papers citing "Improving the Transient Times for Distributed Stochastic Gradient Methods"

16 papers shown
Fully Stochastic Primal-dual Gradient Algorithm for Non-convex Optimization on Random Graphs
Chung-Yiu Yau, Haoming Liu, Hoi-To Wai
24 Oct 2024

Adjacent Leader Decentralized Stochastic Gradient Descent
Haoze He, Jing Wang, A. Choromańska
18 May 2024

Convergence of Decentralized Stochastic Subgradient-based Methods for Nonsmooth Nonconvex Functions
Siyuan Zhang, Nachuan Xiao, Xin Liu
18 Mar 2024

An Accelerated Distributed Stochastic Gradient Method with Momentum
Kun-Yen Huang, Shi Pu, Angelia Nedić
15 Feb 2024

Locally Differentially Private Gradient Tracking for Distributed Online Learning over Directed Graphs
Ziqin Chen, Yongqiang Wang
FedML
24 Oct 2023

Distributed Random Reshuffling Methods with Improved Convergence
Kun-Yen Huang, Linli Zhou, Shi Pu
21 Jun 2023

Distributed Stochastic Optimization under a General Variance Condition
Kun-Yen Huang, Xiao Li, Shin-Yi Pu
FedML
30 Jan 2023

CEDAS: A Compressed Decentralized Stochastic Gradient Method with Improved Convergence
Kun-Yen Huang, Shin-Yi Pu
14 Jan 2023

Beyond spectral gap: The role of the topology in decentralized learning
Thijs Vogels, Hadrien Hendrikx, Martin Jaggi
FedML
07 Jun 2022

Refined Convergence and Topology Learning for Decentralized SGD with Heterogeneous Data
B. L. Bars, A. Bellet, Marc Tommasi, Erick Lavoie, Anne-Marie Kermarrec
FedML
09 Apr 2022

Distributed Random Reshuffling over Networks
Kun-Yen Huang, Xiao Li, Andre Milzarek, Shi Pu, Junwen Qiu
31 Dec 2021

BlueFog: Make Decentralized Algorithms Practical for Optimization and Deep Learning
Bicheng Ying, Kun Yuan, Hanbin Hu, Yiming Chen, W. Yin
FedML
08 Nov 2021

Exponential Graph is Provably Efficient for Decentralized Deep Training
Bicheng Ying, Kun Yuan, Yiming Chen, Hanbin Hu, Pan Pan, W. Yin
FedML
26 Oct 2021

A Unified and Refined Convergence Analysis for Non-Convex Decentralized Learning
Sulaiman A. Alghunaim, Kun Yuan
19 Oct 2021

Removing Data Heterogeneity Influence Enhances Network Topology Dependence of Decentralized SGD
Kun Yuan, Sulaiman A. Alghunaim, Xinmeng Huang
17 May 2021

Swarming for Faster Convergence in Stochastic Optimization
Shi Pu, Alfredo García
11 Jun 2018