ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

arXiv: 2206.03093
Beyond spectral gap: The role of the topology in decentralized learning

7 June 2022
Thijs Vogels, Hadrien Hendrikx, Martin Jaggi
Topics: FedML
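As context for the paper's central quantity: the spectral gap of a gossip (mixing) matrix is the standard measure of how fast a topology averages information. Below is a minimal illustrative sketch, not taken from the paper, that computes this gap for a ring topology; the ring sizes and the uniform 1/3 averaging weights are assumptions chosen for illustration.

```python
import numpy as np

def ring_gossip_matrix(n):
    """Doubly stochastic gossip matrix for a ring of n nodes:
    each node averages uniformly with itself and its two neighbors."""
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = 1 / 3
        W[i, (i - 1) % n] = 1 / 3
        W[i, (i + 1) % n] = 1 / 3
    return W

def spectral_gap(W):
    """1 minus the second-largest eigenvalue magnitude of W."""
    eigs = np.sort(np.abs(np.linalg.eigvals(W)))[::-1]
    return 1.0 - eigs[1]

# The gap shrinks as the ring grows, which is why spectral-gap-based
# rates predict slow convergence on large sparse topologies.
print(spectral_gap(ring_gossip_matrix(8)))
print(spectral_gap(ring_gossip_matrix(64)))
```

For a ring, the eigenvalues of this matrix are (1 + 2·cos(2πk/n))/3, so the gap decays roughly as 1/n²; the paper's thesis is that this worst-case quantity alone does not fully explain decentralized learning behavior.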

Papers citing "Beyond spectral gap: The role of the topology in decentralized learning"

14 papers shown.
Faster Convergence with Less Communication: Broadcast-Based Subgraph Sampling for Decentralized Learning over Wireless Networks
Daniel Pérez Herrera, Zheng Chen, Erik G. Larsson — 24 Jan 2024
Data-heterogeneity-aware Mixing for Decentralized Learning
Yatin Dandi, Anastasia Koloskova, Martin Jaggi, Sebastian U. Stich — 13 Apr 2022
Exponential Graph is Provably Efficient for Decentralized Deep Training
Bicheng Ying, Kun Yuan, Yiming Chen, Hanbin Hu, Pan Pan, W. Yin — 26 Oct 2021 (FedML)
RelaySum for Decentralized Deep Learning on Heterogeneous Data
Thijs Vogels, Lie He, Anastasia Koloskova, Tao R. Lin, Sai Praneeth Karimireddy, Sebastian U. Stich, Martin Jaggi — 08 Oct 2021 (FedML, MoE)
Large Learning Rate Tames Homogeneity: Convergence and Balancing Effect
Yuqing Wang, Minshuo Chen, T. Zhao, Molei Tao — 07 Oct 2021 (AI4CE)
Quasi-Global Momentum: Accelerating Decentralized Deep Learning on Heterogeneous Data
Tao R. Lin, Sai Praneeth Karimireddy, Sebastian U. Stich, Martin Jaggi — 09 Feb 2021 (FedML)
Optimal Complexity in Decentralized Training
Yucheng Lu, Christopher De Sa — 15 Jun 2020
A Unified Theory of Decentralized SGD with Changing Topology and Local Updates
Anastasia Koloskova, Nicolas Loizou, Sadra Boreiri, Martin Jaggi, Sebastian U. Stich — 23 Mar 2020 (FedML)
Decentralized gradient methods: does topology matter?
Giovanni Neglia, Chuan Xu, Don Towsley, G. Calbi — 28 Feb 2020
MATCHA: Speeding Up Decentralized SGD via Matching Decomposition Sampling
Jianyu Wang, Anit Kumar Sahu, Zhouyi Yang, Gauri Joshi, S. Kar — 23 May 2019
Stochastic Gradient Push for Distributed Deep Learning
Mahmoud Assran, Nicolas Loizou, Nicolas Ballas, Michael G. Rabbat — 27 Nov 2018
D$^2$: Decentralized Training over Decentralized Data
Hanlin Tang, Xiangru Lian, Ming Yan, Ce Zhang, Ji Liu — 19 Mar 2018
Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms
Han Xiao, Kashif Rasul, Roland Vollgraf — 25 Aug 2017
NEXT: In-Network Nonconvex Optimization
P. Lorenzo, G. Scutari — 01 Feb 2016