Improving the Transient Times for Distributed Stochastic Gradient Methods

11 May 2021
Kun-Yen Huang
Shi Pu
Abstract

We consider the distributed optimization problem in which $n$ agents, each possessing a local cost function, collaboratively minimize the average of the $n$ cost functions over a connected network. Assuming stochastic gradient information is available, we study a distributed stochastic gradient algorithm, called exact diffusion with adaptive stepsizes (EDAS), adapted from the Exact Diffusion method and NIDS, and perform a non-asymptotic convergence analysis. We not only show that EDAS asymptotically achieves the same network-independent convergence rate as centralized stochastic gradient descent (SGD) for minimizing strongly convex and smooth objective functions, but also characterize the transient time needed for the algorithm to approach the asymptotic convergence rate, which behaves as $K_T = \mathcal{O}\left(\frac{n}{1-\lambda_2}\right)$, where $1-\lambda_2$ denotes the spectral gap of the mixing matrix. To the best of our knowledge, EDAS achieves the shortest transient time when the average of the $n$ cost functions is strongly convex and each cost function is smooth. Numerical simulations further corroborate and strengthen the obtained theoretical results.
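To make the setting concrete, below is a minimal NumPy sketch of an exact-diffusion-style decentralized stochastic gradient update on a toy quadratic problem over a ring network. It is not the paper's exact EDAS recursion: the problem data, Gaussian noise model, mixing matrix, and decaying stepsize schedule are illustrative assumptions chosen only to show the adapt-correct-combine structure.

import numpy as np

# Illustrative sketch (not the paper's EDAS recursion): an exact-diffusion-style
# decentralized stochastic gradient step over a ring network.
# Problem data, noise model, and stepsize schedule are assumptions for demo only.

rng = np.random.default_rng(0)
n, d = 8, 5                        # number of agents, problem dimension

# Local quadratic costs f_i(x) = 0.5 * ||A_i x - b_i||^2
A = rng.standard_normal((n, d, d)) + 2 * np.eye(d)
b = rng.standard_normal((n, d))

# Doubly stochastic, symmetric mixing matrix W for a ring topology
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25
W_bar = 0.5 * (np.eye(n) + W)      # \bar W = (I + W)/2, as in exact diffusion

def stoch_grad(i, x, sigma=0.1):
    # Noisy gradient of agent i's local cost (additive Gaussian noise is an assumption)
    return A[i].T @ (A[i] @ x - b[i]) + sigma * rng.standard_normal(d)

x = np.zeros((n, d))               # local iterates x_i, one row per agent
psi_prev = x.copy()

for k in range(2000):
    alpha = 1.0 / (k + 200)        # decaying stepsize (illustrative schedule)
    # Adapt: local stochastic gradient step at each agent
    psi = np.array([x[i] - alpha * stoch_grad(i, x[i]) for i in range(n)])
    # Correct: add back the previous adapt/combine discrepancy
    phi = psi + x - psi_prev
    # Combine: average the corrected iterates with neighbors via \bar W
    x = W_bar @ phi
    psi_prev = psi

# Compare against the minimizer of the average cost
x_star = np.linalg.solve(sum(A[i].T @ A[i] for i in range(n)),
                         sum(A[i].T @ b[i] for i in range(n)))
print("disagreement across agents:", np.linalg.norm(x - x.mean(axis=0)))
print("error of averaged iterate:", np.linalg.norm(x.mean(axis=0) - x_star))

The "correct" step is what distinguishes exact-diffusion-type methods from plain decentralized SGD: it removes the steady-state bias that heterogeneous local costs would otherwise introduce, which is what allows the network-independent asymptotic rate discussed in the abstract.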
