An Improved Analysis of Gradient Tracking for Decentralized Machine Learning

8 February 2022
Anastasia Koloskova
Tao R. Lin
Sebastian U. Stich
Abstract

We consider decentralized machine learning over a network where the training data is distributed across $n$ agents, each of which can compute stochastic model updates on their local data. The agents' common goal is to find a model that minimizes the average of all local loss functions. While gradient tracking (GT) algorithms can overcome a key challenge, namely accounting for differences between workers' local data distributions, the known convergence rates for GT algorithms are not optimal with respect to their dependence on the mixing parameter $p$ (related to the spectral gap of the connectivity matrix). We provide a tighter analysis of the GT method in the stochastic strongly convex, convex, and non-convex settings. We improve the dependency on $p$ from $\mathcal{O}(p^{-2})$ to $\mathcal{O}(p^{-1}c^{-1})$ in the noiseless case and from $\mathcal{O}(p^{-3/2})$ to $\mathcal{O}(p^{-1/2}c^{-1})$ in the general stochastic case, where $c \geq p$ is related to the negative eigenvalues of the connectivity matrix (and is a constant in most practical applications). This improvement was possible due to a new proof technique which could be of independent interest.
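To make the setting concrete, below is a minimal NumPy sketch of the standard gradient-tracking recursion on a toy quadratic problem: each agent mixes its model with its neighbors (via a doubly stochastic matrix whose spectral gap corresponds to the mixing parameter $p$ above) and maintains a tracker of the average gradient. This is only an illustrative sketch, not the paper's algorithm, step-size schedule, or analysis; all names (W, local_grad, eta, etc.) are assumptions made for the example.

```python
import numpy as np

# Minimal sketch of a gradient-tracking (GT) update on a toy quadratic problem.
# Illustrative only: not the paper's exact method or analysis.

n, d, eta, T = 4, 5, 0.1, 200          # agents, dimension, step size, iterations
rng = np.random.default_rng(0)

# Each agent i holds a local quadratic loss f_i(x) = 0.5 * ||A_i x - b_i||^2.
A = rng.standard_normal((n, d, d))
b = rng.standard_normal((n, d))

def local_grad(i, x):
    return A[i].T @ (A[i] @ x - b[i])

# Doubly stochastic mixing matrix W for a ring topology; its spectral gap
# determines the mixing parameter p discussed in the abstract.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

x = np.zeros((n, d))                                   # local models
y = np.array([local_grad(i, x[i]) for i in range(n)])  # gradient trackers

for _ in range(T):
    # Model update: mix with neighbors, then step along the tracked gradient.
    x_new = W @ (x - eta * y)
    # Tracker update: mix, then add the local gradient difference, so the
    # average of y always equals the average of the local gradients.
    y = W @ y + np.array([local_grad(i, x_new[i]) - local_grad(i, x[i])
                          for i in range(n)])
    x = x_new

print("disagreement across agents:", np.linalg.norm(x - x.mean(axis=0)))
```

Running the sketch, the disagreement between local models shrinks even though the local losses differ, which is exactly the heterogeneity-correction property of gradient tracking that the paper's analysis quantifies.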

View on arXiv