Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication
Felix Sattler, Simon Wiedemann, K. Müller, Wojciech Samek
MQ · 22 May 2018 · arXiv:1805.08768

Papers citing "Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication"

31 / 31 papers shown
Sparsification Under Siege: Defending Against Poisoning Attacks in Communication-Efficient Federated Learning
Zhiyong Jin, Runhua Xu, Chong Li, Y. Liu, Jianxin Li
AAML, FedML · 39 · 0 · 0 · 30 Apr 2025

The Robustness of Spiking Neural Networks in Communication and its Application towards Network Efficiency in Federated Learning
Manh V. Nguyen, Liang Zhao, Bobin Deng, William M. Severa, Honghui Xu, Shaoen Wu
FedML · 32 · 0 · 0 · 19 Sep 2024

Sparse Training for Federated Learning with Regularized Error Correction
Ran Greidi, Kobi Cohen
FedML · 30 · 1 · 0 · 21 Dec 2023

Federated Neural Radiance Fields
Lachlan Holden, Feras Dayoub, D. Harvey, Tat-Jun Chin
FedML, AI4CE · 35 · 4 · 0 · 02 May 2023

ResFed: Communication Efficient Federated Learning by Transmitting Deep Compressed Residuals
Rui Song, Liguo Zhou, Lingjuan Lyu, Andreas Festag, Alois C. Knoll
FedML · 29 · 5 · 0 · 11 Dec 2022

Client Selection for Federated Bayesian Learning
Jiarong Yang, Yuan Liu, Rahif Kassab
FedML · 38 · 11 · 0 · 11 Dec 2022

FLamby: Datasets and Benchmarks for Cross-Silo Federated Learning in Realistic Healthcare Settings
Jean Ogier du Terrail, Samy Ayed, Edwige Cyffers, Felix Grimberg, Chaoyang He, ..., Sai Praneeth Karimireddy, Marco Lorenzi, Giovanni Neglia, Marc Tommasi, M. Andreux
FedML · 41 · 142 · 0 · 10 Oct 2022

Towards Efficient Communications in Federated Learning: A Contemporary Survey
Zihao Zhao, Yuzhu Mao, Yang Liu, Linqi Song, Ouyang Ye, Xinlei Chen, Wenbo Ding
FedML · 54 · 59 · 0 · 02 Aug 2022

Beyond Transmitting Bits: Context, Semantics, and Task-Oriented Communications
Deniz Gunduz, Zhijin Qin, Iñaki Estella Aguerri, Harpreet S. Dhillon, Zhaohui Yang, Aylin Yener, Kai-Kit Wong, C. Chae
27 · 432 · 0 · 19 Jul 2022

Federated learning and next generation wireless communications: A survey on bidirectional relationship
Debaditya Shome, Omer Waqar, Wali Ullah Khan
26 · 31 · 0 · 14 Oct 2021

Communication-Efficient Federated Learning via Predictive Coding
Kai Yue, Richeng Jin, Chau-Wai Wong, H. Dai
FedML · 25 · 14 · 0 · 02 Aug 2021

Communication Efficiency in Federated Learning: Achievements and Challenges
Osama Shahid, Seyedamin Pouriyeh, R. Parizi, Quan Z. Sheng, Gautam Srivastava, Liang Zhao
FedML · 40 · 74 · 0 · 23 Jul 2021

Reward-Based 1-bit Compressed Federated Distillation on Blockchain
Leon Witt, Usama Zafar, KuoYeh Shen, Felix Sattler, Dan Li, Wojciech Samek
FedML · 35 · 4 · 0 · 27 Jun 2021

On the Utility of Gradient Compression in Distributed Training Systems
Saurabh Agarwal, Hongyi Wang, Shivaram Venkataraman, Dimitris Papailiopoulos
31 · 46 · 0 · 28 Feb 2021

FedAT: A High-Performance and Communication-Efficient Federated Learning System with Asynchronous Tiers
Zheng Chai, Yujing Chen, Ali Anwar, Liang Zhao, Yue Cheng, Huzefa Rangwala
FedML · 21 · 121 · 0 · 12 Oct 2020

PSO-PS: Parameter Synchronization with Particle Swarm Optimization for Distributed Training of Deep Neural Networks
Qing Ye, Y. Han, Yanan Sun, Jiancheng Lv
25 · 3 · 0 · 06 Sep 2020

DBS: Dynamic Batch Size For Distributed Deep Neural Network Training
Qing Ye, Yuhao Zhou, Mingjia Shi, Yanan Sun, Jiancheng Lv
19 · 11 · 0 · 23 Jul 2020

Communication Efficient Federated Learning with Energy Awareness over Wireless Networks
Richeng Jin, Xiaofan He, H. Dai
36 · 25 · 0 · 15 Apr 2020

Communication optimization strategies for distributed deep neural network training: A survey
Shuo Ouyang, Dezun Dong, Yemao Xu, Liquan Xiao
30 · 12 · 0 · 06 Mar 2020

Cooperative Learning via Federated Distillation over Fading Channels
Jinhyun Ahn, Osvaldo Simeone, Joonhyuk Kang
FedML · 22 · 29 · 0 · 03 Feb 2020

Communication Efficient Federated Learning over Multiple Access Channels
Wei-Ting Chang, Ravi Tandon
FedML · 13 · 44 · 0 · 23 Jan 2020

One-Bit Over-the-Air Aggregation for Communication-Efficient Federated Edge Learning: Design and Convergence Analysis
Guangxu Zhu, Yuqing Du, Deniz Gunduz, Kaibin Huang
39 · 308 · 0 · 16 Jan 2020

Clustered Federated Learning: Model-Agnostic Distributed Multi-Task Optimization under Privacy Constraints
Felix Sattler, K. Müller, Wojciech Samek
FedML · 48 · 966 · 0 · 04 Oct 2019

DeepCABAC: A Universal Compression Algorithm for Deep Neural Networks
Simon Wiedemann, H. Kirchhoffer, Stefan Matlage, Paul Haase, Arturo Marbán, ..., Ahmed Osman, D. Marpe, H. Schwarz, Thomas Wiegand, Wojciech Samek
49 · 92 · 0 · 27 Jul 2019

Federated Learning over Wireless Fading Channels
M. Amiri, Deniz Gunduz
33 · 505 · 0 · 23 Jul 2019

Accelerating DNN Training in Wireless Federated Edge Learning Systems
Jinke Ren, Guanding Yu, Guangyao Ding
FedML · 26 · 168 · 0 · 23 May 2019

MATCHA: Speeding Up Decentralized SGD via Matching Decomposition Sampling
Jianyu Wang, Anit Kumar Sahu, Zhouyi Yang, Gauri Joshi, S. Kar
29 · 159 · 0 · 23 May 2019

DeepCABAC: Context-adaptive binary arithmetic coding for deep neural network compression
Simon Wiedemann, H. Kirchhoffer, Stefan Matlage, Paul Haase, Arturo Marbán, ..., Ahmed Osman, D. Marpe, H. Schwarz, Thomas Wiegand, Wojciech Samek
MQ · 16 · 21 · 0 · 15 May 2019

Robust and Communication-Efficient Federated Learning from Non-IID Data
Felix Sattler, Simon Wiedemann, K. Müller, Wojciech Samek
FedML · 24 · 1,330 · 0 · 07 Mar 2019

A Distributed Synchronous SGD Algorithm with Global Top-k Sparsification for Low Bandwidth Networks
S. Shi, Qiang-qiang Wang, Kaiyong Zhao, Zhenheng Tang, Yuxin Wang, Xiang Huang, Xiaowen Chu
34 · 134 · 0 · 14 Jan 2019

Entropy-Constrained Training of Deep Neural Networks
Simon Wiedemann, Arturo Marbán, K. Müller, Wojciech Samek
20 · 27 · 0 · 18 Dec 2018