ResearchTrend.AI

AIDE: Fast and Communication Efficient Distributed Optimization (arXiv:1608.06879)

24 August 2016
Sashank J. Reddi
Jakub Konecný
Peter Richtárik
Barnabás Póczós
Alex Smola

Papers citing "AIDE: Fast and Communication Efficient Distributed Optimization"

31 / 31 papers shown
  • Flattened one-bit stochastic gradient descent: compressed distributed optimization with controlled variance. A. Stollenwerk, Laurent Jacques. 17 May 2024. [FedML]
  • Stochastic Distributed Optimization under Average Second-order Similarity: Algorithms and Analysis. Dachao Lin, Yuze Han, Haishan Ye, Zhihua Zhang. 15 Apr 2023.
  • FedSSO: A Federated Server-Side Second-Order Optimization Algorithm. Xinteng Ma, Renyi Bao, Jinpeng Jiang, Yang Liu, Arthur Jiang, Junhua Yan, Xin Liu, Zhisong Pan. 20 Jun 2022. [FedML]
  • SHED: A Newton-type algorithm for federated learning based on incremental Hessian eigenvector sharing. Nicolò Dal Fabbro, S. Dey, M. Rossi, Luca Schenato. 11 Feb 2022. [FedML]
  • Linear Speedup in Personalized Collaborative Learning. El Mahdi Chayti, Sai Praneeth Karimireddy, Sebastian U. Stich, Nicolas Flammarion, Martin Jaggi. 10 Nov 2021. [FedML]
  • Basis Matters: Better Communication-Efficient Second Order Methods for Federated Learning. Xun Qian, Rustem Islamov, M. Safaryan, Peter Richtárik. 02 Nov 2021. [FedML]
  • Acceleration in Distributed Optimization under Similarity. Helena Lofstrom, G. Scutari, Tianyue Cao, Alexander Gasnikov. 24 Oct 2021.
  • A Stochastic Newton Algorithm for Distributed Convex Optimization. Brian Bullins, Kumar Kshitij Patel, Ohad Shamir, Nathan Srebro, Blake E. Woodworth. 07 Oct 2021.
  • Communication Efficiency in Federated Learning: Achievements and Challenges. Osama Shahid, Seyedamin Pouriyeh, R. Parizi, Quan Z. Sheng, Gautam Srivastava, Liang Zhao. 23 Jul 2021. [FedML]
  • FedNL: Making Newton-Type Methods Applicable to Federated Learning. M. Safaryan, Rustem Islamov, Xun Qian, Peter Richtárik. 05 Jun 2021. [FedML]
  • Distributed Second Order Methods with Fast Rates and Compressed Communication. Rustem Islamov, Xun Qian, Peter Richtárik. 14 Feb 2021.
  • Newton Method over Networks is Fast up to the Statistical Precision. Amir Daneshmand, G. Scutari, Pavel Dvurechensky, Alexander Gasnikov. 12 Feb 2021.
  • DONE: Distributed Approximate Newton-type Method for Federated Edge Learning. Canh T. Dinh, N. H. Tran, Tuan Dung Nguyen, Wei Bao, A. R. Balef, B. Zhou, Albert Y. Zomaya. 10 Dec 2020. [FedML]
  • Toward Multiple Federated Learning Services Resource Sharing in Mobile Edge Networks. Minh N. H. Nguyen, N. H. Tran, Y. Tun, Zhu Han, Choong Seon Hong. 25 Nov 2020. [FedML]
  • Sparse sketches with small inversion bias. Michal Derezinski, Zhenyu Liao, Yan Sun, Michael W. Mahoney. 21 Nov 2020.
  • Local SGD: Unified Theory and New Efficient Methods. Eduard A. Gorbunov, Filip Hanzely, Peter Richtárik. 03 Nov 2020. [FedML]
  • Optimization for Supervised Machine Learning: Randomized Algorithms for Data and Parameters. Filip Hanzely. 26 Aug 2020.
  • Fast-Convergent Federated Learning. Hung T. Nguyen, Vikash Sehwag, Seyyedali Hosseinalipour, Christopher G. Brinton, M. Chiang, H. Vincent Poor. 26 Jul 2020. [FedML]
  • COKE: Communication-Censored Decentralized Kernel Learning. Ping Xu, Yue Wang, Xiang Chen, Z. Tian. 28 Jan 2020.
  • FedDANE: A Federated Newton-Type Method. Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, Virginia Smith. 07 Jan 2020. [FedML]
  • Communication-Efficient Local Decentralized SGD Methods. Xiang Li, Wenhao Yang, Shusen Wang, Zhihua Zhang. 21 Oct 2019.
  • Overcoming Forgetting in Federated Learning on Non-IID Data. N. Shoham, Tomer Avidor, Aviv Keren, Nadav Tal-Israel, Daniel Benditkis, Liron Mor Yosef, Itai Zeitak. 17 Oct 2019. [CLL, FedML]
  • On the Convergence of FedAvg on Non-IID Data. Xiang Li, Kaixuan Huang, Wenhao Yang, Shusen Wang, Zhihua Zhang. 04 Jul 2019. [FedML]
  • Natural Compression for Distributed Deep Learning. Samuel Horváth, Chen-Yu Ho, L. Horvath, Atal Narayan Sahu, Marco Canini, Peter Richtárik. 27 May 2019.
  • A Distributed Second-Order Algorithm You Can Trust. Celestine Mendler-Dünner, Aurelien Lucchi, Matilde Gargiani, An Bian, Thomas Hofmann, Martin Jaggi. 20 Jun 2018.
  • Gradient Sparsification for Communication-Efficient Distributed Optimization. Jianqiao Wangni, Jialei Wang, Ji Liu, Tong Zhang. 26 Oct 2017.
  • Stochastic Nonconvex Optimization with Large Minibatches. Weiran Wang, Nathan Srebro. 25 Sep 2017.
  • GIANT: Globally Improved Approximate Newton Method for Distributed Optimization. Shusen Wang, Farbod Roosta-Khorasani, Peng Xu, Michael W. Mahoney. 11 Sep 2017.
  • Memory and Communication Efficient Distributed Stochastic Optimization with Minibatch-Prox. Jialei Wang, Weiran Wang, Nathan Srebro. 21 Feb 2017.
  • Federated Learning: Strategies for Improving Communication Efficiency. Jakub Konecný, H. B. McMahan, Felix X. Yu, Peter Richtárik, A. Suresh, Dave Bacon. 18 Oct 2016. [FedML]
  • Federated Optimization: Distributed Machine Learning for On-Device Intelligence. Jakub Konecný, H. B. McMahan, Daniel Ramage, Peter Richtárik. 08 Oct 2016. [FedML]