Decentralized Stochastic Optimization and Gossip Algorithms with Compressed Communication

Anastasia Koloskova, Sebastian U. Stich, Martin Jaggi
1 February 2019 · FedML
arXiv:1902.00340 · PDF · HTML
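The paper introduces CHOCO-Gossip and CHOCO-SGD, gossip-based schemes in which each node maintains a public copy of its model that its neighbours track, so that only a compressed difference has to be transmitted each round. Below is a minimal NumPy sketch of one compressed gossip averaging round in that spirit; the top-k compressor, the 4-node ring mixing matrix, the step size gamma, and the round count are illustrative assumptions, not the paper's exact configuration.

```python
# A minimal sketch of gossip averaging with compressed communication
# (in the spirit of CHOCO-Gossip). Compressor, topology, step size and
# round count are illustrative assumptions, not values from the paper.
import numpy as np

def top_k(v, k):
    """Keep the k largest-magnitude entries of v and zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def compressed_gossip_round(x, x_hat, W, gamma, k):
    """One gossip round in which nodes exchange only compressed differences.

    x     : (n, d) private vectors held by the n nodes
    x_hat : (n, d) public copies known to the neighbours; they stay in sync
            because only the compressed difference q is ever transmitted
    W     : (n, n) symmetric, doubly stochastic mixing matrix of the graph
    """
    n = x.shape[0]
    q = np.stack([top_k(x[i] - x_hat[i], k) for i in range(n)])  # compressed messages
    x_hat = x_hat + q                              # every public copy is updated from q only
    x = x + gamma * (W @ x_hat - x_hat)            # consensus step on the public copies
    return x, x_hat

if __name__ == "__main__":
    # Toy run: 4 nodes on a ring average 8-dimensional vectors with 2-sparse messages.
    rng = np.random.default_rng(0)
    W = np.array([[0.50, 0.25, 0.00, 0.25],
                  [0.25, 0.50, 0.25, 0.00],
                  [0.00, 0.25, 0.50, 0.25],
                  [0.25, 0.00, 0.25, 0.50]])
    x = rng.normal(size=(4, 8))
    target = x.mean(axis=0)            # a doubly stochastic W preserves this average exactly
    x_hat = np.zeros_like(x)
    print("initial max deviation:", np.abs(x - target).max())
    for _ in range(500):
        x, x_hat = compressed_gossip_round(x, x_hat, W, gamma=0.1, k=2)
    print("after 500 rounds:     ", np.abs(x - target).max())
```

Because W is doubly stochastic, the node average is preserved exactly at every round even though each message carries only k of the d coordinates; the compression level and the step size only affect how fast the nodes reach consensus.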

Papers citing "Decentralized Stochastic Optimization and Gossip Algorithms with Compressed Communication"

47 / 97 papers shown
Decentralized Composite Optimization with Compression. Yao Li, Xiaorui Liu, Jiliang Tang, Ming Yan, Kun Yuan. 10 Aug 2021.
Decentralized Federated Learning: Balancing Communication and Computing Costs. Wei Liu, Li Chen, Wenyi Zhang. 26 Jul 2021. [FedML]
A Field Guide to Federated Optimization. Jianyu Wang, Zachary B. Charles, Zheng Xu, Gauri Joshi, H. B. McMahan, ..., Mi Zhang, Tong Zhang, Chunxiang Zheng, Chen Zhu, Wennan Zhu. 14 Jul 2021. [FedML]
BAGUA: Scaling up Distributed Learning with System Relaxations. Shaoduo Gan, Xiangru Lian, Rui Wang, Jianbin Chang, Chengjun Liu, ..., Jiawei Jiang, Binhang Yuan, Sen Yang, Ji Liu, Ce Zhang. 03 Jul 2021.
ResIST: Layer-Wise Decomposition of ResNets for Distributed Training. Chen Dun, Cameron R. Wolfe, C. Jermaine, Anastasios Kyrillidis. 02 Jul 2021.
The Values Encoded in Machine Learning Research. Abeba Birhane, Pratyusha Kalluri, Dallas Card, William Agnew, Ravit Dotan, Michelle Bao. 29 Jun 2021.
Decentralized Constrained Optimization: Double Averaging and Gradient Projection. Firooz Shahriari-Mehr, David Bosch, Ashkan Panahi. 21 Jun 2021.
Decentralized Local Stochastic Extra-Gradient for Variational Inequalities. Aleksandr Beznosikov, Pavel Dvurechensky, Anastasia Koloskova, V. Samokhin, Sebastian U. Stich, Alexander Gasnikov. 15 Jun 2021.
PPT: A Privacy-Preserving Global Model Training Protocol for Federated Learning in P2P Networks. Qian Chen, Zilong Wang, Wenjing Zhang, Xiaodong Lin. 30 May 2021. [FedML]
Towards Demystifying Serverless Machine Learning Training. Jiawei Jiang, Shaoduo Gan, Yue Liu, Fanlin Wang, Gustavo Alonso, Ana Klimovic, Ankit Singla, Wentao Wu, Ce Zhang. 17 May 2021.
Innovation Compression for Communication-efficient Distributed Optimization with Linear Convergence. Jiaqi Zhang, Keyou You, Lihua Xie. 14 May 2021.
Regret and Cumulative Constraint Violation Analysis for Distributed Online Constrained Convex Optimization. Xinlei Yi, Xiuxian Li, Tao Yang, Lihua Xie, Tianyou Chai, Karl H. Johansson. 01 May 2021.
DataLens: Scalable Privacy Preserving Training via Gradient Compression and Aggregation. Wei Ping, Fan Wu, Yunhui Long, Luka Rimanic, Ce Zhang, Bo-wen Li. 20 Mar 2021. [FedML]
Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices. Max Ryabinin, Eduard A. Gorbunov, Vsevolod Plokhotnyuk, Gennady Pekhimenko. 04 Mar 2021.
On the Utility of Gradient Compression in Distributed Training Systems. Saurabh Agarwal, Hongyi Wang, Shivaram Venkataraman, Dimitris Papailiopoulos. 28 Feb 2021.
Wirelessly Powered Federated Edge Learning: Optimal Tradeoffs Between Convergence and Power Transfer. Qunsong Zeng, Yuqing Du, Kaibin Huang. 24 Feb 2021.
Communication-efficient Distributed Cooperative Learning with Compressed Beliefs. Taha Toghani, César A. Uribe. 14 Feb 2021.
Sparse-Push: Communication- & Energy-Efficient Decentralized Distributed Learning over Directed & Time-Varying Graphs with non-IID Datasets. Sai Aparna Aketi, Amandeep Singh, J. Rabaey. 10 Feb 2021.
Consensus Control for Decentralized Deep Learning. Lingjing Kong, Tao R. Lin, Anastasia Koloskova, Martin Jaggi, Sebastian U. Stich. 09 Feb 2021.
Federated Learning over Wireless Device-to-Device Networks: Algorithms and Convergence Analysis. Hong Xing, Osvaldo Simeone, Suzhi Bi. 29 Jan 2021.
IPLS : A Framework for Decentralized Federated Learning. C. Pappas, Dimitris Chatzopoulos, S. Lalis, M. Vavalis. 06 Jan 2021. [VLM]
Decentralized Federated Learning via Mutual Knowledge Transfer. Chengxi Li, Gang Li, P. Varshney. 24 Dec 2020. [FedML]
On the Benefits of Multiple Gossip Steps in Communication-Constrained Decentralized Optimization. Abolfazl Hashemi, Anish Acharya, Rudrajit Das, H. Vikalo, Sujay Sanghavi, Inderjit Dhillon. 20 Nov 2020.
A Linearly Convergent Algorithm for Decentralized Optimization: Sending Less Bits for Free! D. Kovalev, Anastasia Koloskova, Martin Jaggi, Peter Richtárik, Sebastian U. Stich. 03 Nov 2020.
Throughput-Optimal Topology Design for Cross-Silo Federated Learning. Othmane Marfoq, Chuan Xu, Giovanni Neglia, Richard Vidal. 23 Oct 2020. [FedML]
FedAT: A High-Performance and Communication-Efficient Federated Learning System with Asynchronous Tiers. Zheng Chai, Yujing Chen, Ali Anwar, Liang Zhao, Yue Cheng, Huzefa Rangwala. 12 Oct 2020. [FedML]
Sparse Communication for Training Deep Networks. Negar Foroutan, Martin Jaggi. 19 Sep 2020. [FedML]
On Communication Compression for Distributed Optimization on Heterogeneous Data. Sebastian U. Stich. 04 Sep 2020.
Periodic Stochastic Gradient Descent with Momentum for Decentralized Training. Hongchang Gao, Heng-Chiao Huang. 24 Aug 2020.
PowerGossip: Practical Low-Rank Communication Compression in Decentralized Deep Learning. Thijs Vogels, Sai Praneeth Karimireddy, Martin Jaggi. 04 Aug 2020. [FedML]
On stochastic mirror descent with interacting particles: convergence properties and variance reduction. Anastasia Borovykh, N. Kantas, P. Parpas, G. Pavliotis. 15 Jul 2020.
Robust Federated Learning: The Case of Affine Distribution Shifts. Amirhossein Reisizadeh, Farzan Farnia, Ramtin Pedarsani, Ali Jadbabaie. 16 Jun 2020. [FedML, OOD]
Optimal Complexity in Decentralized Training. Yucheng Lu, Christopher De Sa. 15 Jun 2020.
Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks. Zhishuai Guo, Mingrui Liu, Zhuoning Yuan, Li Shen, Wei Liu, Tianbao Yang. 05 May 2020.
A Robust Gradient Tracking Method for Distributed Optimization over Directed Networks. Shi Pu. 31 Mar 2020.
A Unified Theory of Decentralized SGD with Changing Topology and Local Updates. Anastasia Koloskova, Nicolas Loizou, Sadra Boreiri, Martin Jaggi, Sebastian U. Stich. 23 Mar 2020. [FedML]
Decentralized gradient methods: does topology matter? Giovanni Neglia, Chuan Xu, Don Towsley, G. Calbi. 28 Feb 2020.
Gradient tracking and variance reduction for decentralized optimization and machine learning. Ran Xin, S. Kar, U. Khan. 13 Feb 2020.
Q-GADMM: Quantized Group ADMM for Communication Efficient Decentralized Machine Learning. Anis Elgabli, Jihong Park, Amrit Singh Bedi, Chaouki Ben Issaid, M. Bennis, Vaneet Aggarwal. 23 Oct 2019.
Communication-Efficient Local Decentralized SGD Methods. Xiang Li, Wenhao Yang, Shusen Wang, Zhihua Zhang. 21 Oct 2019.
Robust Distributed Accelerated Stochastic Gradient Methods for Multi-Agent Networks. Alireza Fallah, Mert Gurbuzbalaban, Asuman Ozdaglar, Umut Simsekli, Lingjiong Zhu. 19 Oct 2019.
Clustered Federated Learning: Model-Agnostic Distributed Multi-Task Optimization under Privacy Constraints. Felix Sattler, K. Müller, Wojciech Samek. 04 Oct 2019. [FedML]
Gradient Descent with Compressed Iterates. Ahmed Khaled, Peter Richtárik. 10 Sep 2019.
Robust and Communication-Efficient Collaborative Learning. Amirhossein Reisizadeh, Hossein Taheri, Aryan Mokhtari, Hamed Hassani, Ramtin Pedarsani. 24 Jul 2019.
Asymptotic Network Independence in Distributed Stochastic Optimization for Machine Learning. Shi Pu, Alexander Olshevsky, I. Paschalidis. 28 Jun 2019.
MATCHA: Speeding Up Decentralized SGD via Matching Decomposition Sampling. Jianyu Wang, Anit Kumar Sahu, Zhouyi Yang, Gauri Joshi, S. Kar. 23 May 2019.
Optimal Distributed Online Prediction using Mini-Batches. O. Dekel, Ran Gilad-Bachrach, Ohad Shamir, Lin Xiao. 07 Dec 2010.