ResearchTrend.AI

SAGDA: Achieving $\mathcal{O}(\epsilon^{-2})$ Communication Complexity in Federated Min-Max Learning
arXiv: 2210.00611
2 October 2022
Haibo Yang, Zhuqing Liu, Xin Zhang, Jia Liu
FedML

Papers citing "SAGDA: Achieving $\mathcal{O}(\epsilon^{-2})$ Communication Complexity in Federated Min-Max Learning"

17 / 17 papers shown

Anarchic Federated Learning
Haibo Yang, Xin Zhang, Prashant Khanduri, Jia Liu
FedML
23 Aug 2021

STEM: A Stochastic Two-Sided Momentum Algorithm Achieving Near-Optimal Sample and Communication Complexities for Federated Learning
Prashant Khanduri, Pranay Sharma, Haibo Yang, Min-Fong Hong, Jia Liu, K. Rajawat, P. Varshney
FedML
19 Jun 2021

Local Stochastic Gradient Descent Ascent: Convergence Analysis and Communication Efficiency
Yuyang Deng, M. Mahdavi
25 Feb 2021

Achieving Linear Speedup with Partial Worker Participation in Non-IID Federated Learning
Haibo Yang, Minghong Fang, Jia Liu
FedML
27 Jan 2021

Robust Federated Learning: The Case of Affine Distribution Shifts
Amirhossein Reisizadeh, Farzan Farnia, Ramtin Pedarsani, Ali Jadbabaie
FedML, OOD
16 Jun 2020

FedGAN: Federated Generative Adversarial Networks for Distributed Data
M. Rasouli, Tao Sun, Ram Rajagopal
FedML
12 Jun 2020

Advances and Open Problems in Federated Learning
Peter Kairouz, H. B. McMahan, Brendan Avent, A. Bellet, M. Bennis, ..., Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu, Sen Zhao
FedML, AI4CE
10 Dec 2019

SCAFFOLD: Stochastic Controlled Averaging for Federated Learning
Sai Praneeth Karimireddy, Satyen Kale, M. Mohri, Sashank J. Reddi, Sebastian U. Stich, A. Suresh
FedML
14 Oct 2019

On Gradient Descent Ascent for Nonconvex-Concave Minimax Problems
Tianyi Lin, Chi Jin, Michael I. Jordan
02 Jun 2019

Robust and Communication-Efficient Federated Learning from Non-IID Data
Felix Sattler, Simon Wiedemann, K. Müller, Wojciech Samek
FedML
07 Mar 2019

Weakly-Convex Concave Min-Max Optimization: Provable Algorithms and Applications in Machine Learning
Hassan Rafique, Mingrui Liu, Qihang Lin, Tianbao Yang
04 Oct 2018

Cooperative SGD: A Unified Framework for the Design and Analysis of Communication-Efficient SGD Algorithms
Jianyu Wang, Gauri Joshi
22 Aug 2018

Don't Use Large Mini-Batches, Use Local SGD
Tao R. Lin, Sebastian U. Stich, Kumar Kshitij Patel, Martin Jaggi
22 Aug 2018

Parallel Restarted SGD with Faster Convergence and Less Communication: Demystifying Why Model Averaging Works for Deep Learning
Hao Yu, Sen Yang, Shenghuo Zhu
MoMe, FedML
17 Jul 2018

Local SGD Converges Fast and Communicates Little
Sebastian U. Stich
FedML
24 May 2018

On the convergence properties of a $K$-step averaging stochastic gradient descent algorithm for nonconvex optimization
Fan Zhou, Guojing Cong
03 Aug 2017

Towards Deep Learning Models Resistant to Adversarial Attacks
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu
SILM, OOD
19 Jun 2017