ResearchTrend.AI

arXiv 2111.00465 · Cited By
DAdaQuant: Doubly-adaptive quantization for communication-efficient Federated Learning
31 October 2021
Robert Hönig, Yiren Zhao, Robert D. Mullins
FedML

Papers citing "DAdaQuant: Doubly-adaptive quantization for communication-efficient Federated Learning"

28 / 28 papers shown
Sparsification Under Siege: Defending Against Poisoning Attacks in Communication-Efficient Federated Learning
Zhiyong Jin, Runhua Xu, Chong Li, Yunxing Liu, Jianxin Li
AAML, FedML · 121 · 0 · 0 · 30 Apr 2025
Accelerating Energy-Efficient Federated Learning in Cell-Free Networks with Adaptive Quantization
Afsaneh Mahmoudi, Ming Xiao, Emil Björnson
98 · 0 · 0 · 31 Dec 2024
Achieving Dimension-Free Communication in Federated Learning via Zeroth-Order Optimization
Zhe Li, Bicheng Ying, Zidong Liu, Chaosheng Dong, Haibo Yang
FedML · 120 · 3 · 0 · 24 May 2024
SADDLe: Sharpness-Aware Decentralized Deep Learning with Heterogeneous Data
Sakshi Choudhary, Sai Aparna Aketi, Kaushik Roy
FedML · 87 · 0 · 0 · 22 May 2024
DP-DyLoRA: Fine-Tuning Transformer-Based Models On-Device under Differentially Private Federated Learning using Dynamic Low-Rank Adaptation
Jie Xu, Karthikeyan P. Saravanan, Rogier van Dalen, Haaris Mehmood, David Tuckey, Mete Ozay
154 · 8 · 0 · 10 May 2024
Adaptive Quantization of Model Updates for Communication-Efficient Federated Learning
Divyansh Jhunjhunwala, Advait Gadhikar, Gauri Joshi, Yonina C. Eldar
FedML, MQ · 57 · 110 · 0 · 08 Feb 2021
FEDZIP: A Compression Framework for Communication-Efficient Federated Learning
Amirhossein Malekijoo, Mohammad Javad Fadaeieslam, Hanieh Malekijou, Morteza Homayounfar, F. Alizadeh-Shabdiz, Reza Rawassizadeh
FedML · 71 · 55 · 0 · 02 Feb 2021
FracTrain: Fractionally Squeezing Bit Savings Both Temporally and Spatially for Efficient DNN Training
Y. Fu, Haoran You, Yang Zhao, Yue Wang, Chaojian Li, K. Gopalakrishnan, Zhangyang Wang, Yingyan Lin
MQ · 66 · 32 · 0 · 24 Dec 2020
Adaptive Federated Dropout: Improving Communication Efficiency and Generalization for Federated Learning
Nader Bouacida, Jiahui Hou, H. Zang, Xin Liu
FedML · 83 · 77 · 0 · 08 Nov 2020
Can Federated Learning Save The Planet?
Xinchi Qiu, Titouan Parcollet, Daniel J. Beutel, Taner Topal, Akhil Mathur, Nicholas D. Lane
57 · 81 · 0 · 13 Oct 2020
Flower: A Friendly Federated Learning Research Framework
Daniel J. Beutel, Taner Topal, Akhil Mathur, Xinchi Qiu, Javier Fernandez-Marques, ..., Lorenzo Sani, Kwing Hei Li, Titouan Parcollet, Pedro Porto Buarque de Gusmão, Nicholas D. Lane
FedML · 138 · 815 · 0 · 28 Jul 2020
Fast-Convergent Federated Learning
Hung T. Nguyen, Vikash Sehwag, Seyyedali Hosseinalipour, Christopher G. Brinton, M. Chiang, H. Vincent Poor
FedML · 74 · 195 · 0 · 26 Jul 2020
FetchSGD: Communication-Efficient Federated Learning with Sketching
D. Rothchild, Ashwinee Panda, Enayat Ullah, Nikita Ivkin, Ion Stoica, Vladimir Braverman, Joseph E. Gonzalez, Raman Arora
FedML · 79 · 370 · 0 · 15 Jul 2020
Federated Learning With Quantized Global Model Updates
M. Amiri, Deniz Gunduz, Sanjeev R. Kulkarni, H. Vincent Poor
FedML · 92 · 132 · 0 · 18 Jun 2020
UVeQFed: Universal Vector Quantization for Federated Learning
Nir Shlezinger, Mingzhe Chen, Yonina C. Eldar, H. Vincent Poor, Shuguang Cui
FedML, MQ · 54 · 229 · 0 · 05 Jun 2020
Language Models are Few-Shot Learners
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, ..., Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei
BDL · 877 · 42,379 · 0 · 28 May 2020
Fractional Skipping: Towards Finer-Grained Dynamic CNN Inference
Jianghao Shen, Y. Fu, Yue Wang, Pengfei Xu, Zhangyang Wang, Yingyan Lin
MQ · 52 · 44 · 0 · 03 Jan 2020
PyTorch: An Imperative Style, High-Performance Deep Learning Library
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, ..., Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, Soumith Chintala
ODL · 544 · 42,591 · 0 · 03 Dec 2019
FedPAQ: A Communication-Efficient Federated Learning Method with Periodic Averaging and Quantization
Amirhossein Reisizadeh, Aryan Mokhtari, Hamed Hassani, Ali Jadbabaie, Ramtin Pedarsani
FedML · 258 · 774 · 0 · 28 Sep 2019
Model Pruning Enables Efficient Federated Learning on Edge Devices
Yuang Jiang, Shiqiang Wang, Victor Valls, Bongjun Ko, Wei-Han Lee, Kin K. Leung, Leandros Tassiulas
92 · 463 · 0 · 26 Sep 2019
Training Deep Neural Networks with 8-bit Floating Point Numbers
Naigang Wang, Jungwook Choi, D. Brand, Chia-Yu Chen, K. Gopalakrishnan
MQ · 65 · 503 · 0 · 19 Dec 2018
Federated Optimization in Heterogeneous Networks
Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, Virginia Smith
FedML · 190 · 5,220 · 0 · 14 Dec 2018
LEAF: A Benchmark for Federated Settings
S. Caldas, Sai Meher Karthik Duddu, Peter Wu, Tian Li, Jakub Konecný, H. B. McMahan, Virginia Smith, Ameet Talwalkar
FedML · 158 · 1,422 · 0 · 03 Dec 2018
Scalable Methods for 8-bit Training of Neural Networks
Ron Banner, Itay Hubara, Elad Hoffer, Daniel Soudry
MQ · 84 · 339 · 0 · 25 May 2018
DeepMood: Modeling Mobile Phone Typing Dynamics for Mood Detection
Bokai Cao, Lei Zheng, Chenwei Zhang, Philip S. Yu, A. Piscitello, John Zulueta, Olusola Ajilore, K. Ryan, Alex Leow
60 · 125 · 0 · 23 Mar 2018
Training Quantized Nets: A Deeper Understanding
Hao Li, Soham De, Zheng Xu, Christoph Studer, H. Samet, Tom Goldstein
MQ · 55 · 211 · 0 · 07 Jun 2017
Communication-Efficient Learning of Deep Networks from Decentralized Data
H. B. McMahan, Eider Moore, Daniel Ramage, S. Hampson, Blaise Agüera y Arcas
FedML · 408 · 17,593 · 0 · 17 Feb 2016
Deep Learning with Limited Numerical Precision
Suyog Gupta, A. Agrawal, K. Gopalakrishnan, P. Narayanan
HAI · 207 · 2,049 · 0 · 09 Feb 2015