ResearchTrend.AI
Communication-efficient distributed SGD with Sketching (arXiv:1903.04488)

12 March 2019
Nikita Ivkin, D. Rothchild, Enayat Ullah, Vladimir Braverman, Ion Stoica, R. Arora · FedML

Papers citing "Communication-efficient distributed SGD with Sketching"

43 / 43 papers shown
  • Sketched Adaptive Federated Deep Learning: A Sharp Convergence Analysis
    Zhijie Chen, Qiaobo Li, A. Banerjee · FedML · 11 Nov 2024
  • Loss Gradient Gaussian Width based Generalization and Optimization Guarantees
    A. Banerjee, Qiaobo Li, Yingxue Zhou · 11 Jun 2024
  • DoCoFL: Downlink Compression for Cross-Device Federated Learning
    Ron Dorfman, S. Vargaftik, Y. Ben-Itzhak, Kfir Y. Levy · FedML · 01 Feb 2023
  • M22: A Communication-Efficient Algorithm for Federated Learning Inspired by Rate-Distortion
    Yangyi Liu, Stefano Rini, Sadaf Salehkalaibar, Jun Chen · FedML · 23 Jan 2023
  • Navigation as Attackers Wish? Towards Building Robust Embodied Agents under Federated Learning
    Yunchao Zhang, Zonglin Di, KAI-QING Zhou, Cihang Xie, Xin Eric Wang · FedML, AAML · 27 Nov 2022
  • Analysis of Error Feedback in Federated Non-Convex Optimization with Biased Compression
    Xiaoyun Li, Ping Li · FedML · 25 Nov 2022
  • Asymptotics of the Sketched Pseudoinverse
    Daniel LeJeune, Pratik V. Patil, Hamid Javadi, Richard G. Baraniuk, R. Tibshirani · 07 Nov 2022
  • FedGRec: Federated Graph Recommender System with Lazy Update of Latent Embeddings
    Junyi Li, Heng-Chiao Huang · FedML · 25 Oct 2022
  • Communication-Efficient Adam-Type Algorithms for Distributed Data Mining
    Wenhan Xian, Feihu Huang, Heng-Chiao Huang · FedML · 14 Oct 2022
  • STSyn: Speeding Up Local SGD with Straggler-Tolerant Synchronization
    Feng Zhu, Jingjing Zhang, Xin Eric Wang · 06 Oct 2022
  • Neurotoxin: Durable Backdoors in Federated Learning
    Zhengming Zhang, Ashwinee Panda, Linyue Song, Yaoqing Yang, Michael W. Mahoney, Joseph E. Gonzalez, Kannan Ramchandran, Prateek Mittal · FedML · 12 Jun 2022
  • Fine-tuning Language Models over Slow Networks using Activation Compression with Guarantees
    Jue Wang, Binhang Yuan, Luka Rimanic, Yongjun He, Tri Dao, Beidi Chen, Christopher Ré, Ce Zhang · AI4CE · 02 Jun 2022
  • Personalized Federated Learning with Server-Side Information
    Jaehun Song, Min Hwan Oh, Hyung-Sin Kim · FedML · 23 May 2022
  • On the Convergence of Heterogeneous Federated Learning with Arbitrary Adaptive Online Model Pruning
    Hanhan Zhou, Tian-Shing Lan, Guru Venkataramani, Wenbo Ding · FedML · 27 Jan 2022
  • Egeria: Efficient DNN Training with Knowledge-Guided Layer Freezing
    Yiding Wang, D. Sun, Kai Chen, Fan Lai, Mosharaf Chowdhury · 17 Jan 2022
  • Communication-Efficient Distributed SGD with Compressed Sensing
    Yujie Tang, V. Ramanathan, Junshan Zhang, Na Li · FedML · 15 Dec 2021
  • FastSGD: A Fast Compressed SGD Framework for Distributed Machine Learning
    Keyu Yang, Lu Chen, Zhihao Zeng, Yunjun Gao · 08 Dec 2021
  • Comfetch: Federated Learning of Large Networks on Constrained Clients via Sketching
    Tahseen Rabbani, Brandon Yushan Feng, Marco Bornstein, Kyle Rui Sang, Yifan Yang, Arjun Rajkumar, A. Varshney, Furong Huang · FedML · 17 Sep 2021
  • EDEN: Communication-Efficient and Robust Distributed Mean Estimation for Federated Learning
    S. Vargaftik, Ran Ben-Basat, Amit Portnoy, Gal Mendelson, Y. Ben-Itzhak, Michael Mitzenmacher · FedML · 19 Aug 2021
  • ErrorCompensatedX: error compensation for variance reduced algorithms
    Hanlin Tang, Yao Li, Ji Liu, Ming Yan · 04 Aug 2021
  • BAGUA: Scaling up Distributed Learning with System Relaxations
    Shaoduo Gan, Xiangru Lian, Rui Wang, Jianbin Chang, Chengjun Liu, ..., Jiawei Jiang, Binhang Yuan, Sen Yang, Ji Liu, Ce Zhang · 03 Jul 2021
  • Escaping Saddle Points with Compressed SGD
    Dmitrii Avdiukhin, G. Yaroslavtsev · 21 May 2021
  • From Distributed Machine Learning to Federated Learning: A Survey
    Ji Liu, Jizhou Huang, Yang Zhou, Xuhong Li, Shilei Ji, Haoyi Xiong, Dejing Dou · FedML, OOD · 29 Apr 2021
  • ScaleCom: Scalable Sparsified Gradient Compression for Communication-Efficient Distributed Training
    Chia-Yu Chen, Jiamin Ni, Songtao Lu, Xiaodong Cui, Pin-Yu Chen, ..., Naigang Wang, Swagath Venkataramani, Vijayalakshmi Srinivasan, Wei Zhang, K. Gopalakrishnan · 21 Apr 2021
  • On the Utility of Gradient Compression in Distributed Training Systems
    Saurabh Agarwal, Hongyi Wang, Shivaram Venkataraman, Dimitris Papailiopoulos · 28 Feb 2021
  • Federated Learning over Wireless Networks: A Band-limited Coordinated Descent Approach
    Junshan Zhang, Na Li, M. Dedeoglu · FedML · 16 Feb 2021
  • 1-bit Adam: Communication Efficient Large-Scale Training with Adam's Convergence Speed
    Hanlin Tang, Shaoduo Gan, A. A. Awan, Samyam Rajbhandari, Conglong Li, Xiangru Lian, Ji Liu, Ce Zhang, Yuxiong He · AI4CE · 04 Feb 2021
  • CADA: Communication-Adaptive Distributed Adam
    Tianyi Chen, Ziye Guo, Yuejiao Sun, W. Yin · ODL · 31 Dec 2020
  • Sketch and Scale: Geo-distributed tSNE and UMAP
    Viska Wei, Nikita Ivkin, Vladimir Braverman, A. Szalay · 11 Nov 2020
  • Compression Boosts Differentially Private Federated Learning
    Raouf Kerkouche, G. Ács, C. Castelluccia, P. Genevès · FedML · 10 Nov 2020
  • DistDGL: Distributed Graph Neural Network Training for Billion-Scale Graphs
    Da Zheng, Chao Ma, Minjie Wang, Jinjing Zhou, Qidong Su, Xiang Song, Quan Gan, Zheng-Wei Zhang, George Karypis · FedML, GNN · 11 Oct 2020
  • HeteroFL: Computation and Communication Efficient Federated Learning for Heterogeneous Clients
    Enmao Diao, Jie Ding, Vahid Tarokh · FedML · 03 Oct 2020
  • On Communication Compression for Distributed Optimization on Heterogeneous Data
    Sebastian U. Stich · 04 Sep 2020
  • LotteryFL: Personalized and Communication-Efficient Federated Learning with Lottery Ticket Hypothesis on Non-IID Datasets
    Ang Li, Jingwei Sun, Binghui Wang, Lin Duan, Sicheng Li, Yiran Chen, H. Li · FedML · 07 Aug 2020
  • PowerGossip: Practical Low-Rank Communication Compression in Decentralized Deep Learning
    Thijs Vogels, Sai Praneeth Karimireddy, Martin Jaggi · FedML · 04 Aug 2020
  • FetchSGD: Communication-Efficient Federated Learning with Sketching
    D. Rothchild, Ashwinee Panda, Enayat Ullah, Nikita Ivkin, Ion Stoica, Vladimir Braverman, Joseph E. Gonzalez, Raman Arora · FedML · 15 Jul 2020
  • Federated Learning with Compression: Unified Analysis and Sharp Guarantees
    Farzin Haddadpour, Mohammad Mahdi Kamani, Aryan Mokhtari, M. Mahdavi · FedML · 02 Jul 2020
  • Is Network the Bottleneck of Distributed Training?
    Zhen Zhang, Chaokun Chang, Yanghua Peng, Yida Wang, R. Arora, Xin Jin · 17 Jun 2020
  • Layer-wise Adaptive Gradient Sparsification for Distributed Deep Learning with Convergence Guarantees
    S. Shi, Zhenheng Tang, Qiang-qiang Wang, Kaiyong Zhao, Xiaowen Chu · 20 Nov 2019
  • Fast Machine Learning with Byzantine Workers and Servers
    El-Mahdi El-Mhamdi, R. Guerraoui, Arsany Guirguis · 18 Nov 2019
  • Distributed Learning of Deep Neural Networks using Independent Subnet Training
    John Shelton Hyatt, Cameron R. Wolfe, Michael Lee, Yuxin Tang, Anastasios Kyrillidis, Christopher M. Jermaine · OOD · 04 Oct 2019
  • OpenNMT: Open-Source Toolkit for Neural Machine Translation
    Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, Alexander M. Rush · 10 Jan 2017
  • On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
    N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang · ODL · 15 Sep 2016