ResearchTrend.AI

arXiv: 2206.01132
A Communication-efficient Algorithm with Linear Convergence for Federated Minimax Learning

2 June 2022
Zhenyu Sun, Ermin Wei
FedML

Papers citing "A Communication-efficient Algorithm with Linear Convergence for Federated Minimax Learning"

4 / 4 papers shown
Robust Decentralized Learning with Local Updates and Gradient Tracking
Sajjad Ghiasvand, Amirhossein Reisizadeh, Mahnoosh Alizadeh, Ramtin Pedarsani
02 May 2024
Communication-Efficient Gradient Descent-Ascent Methods for Distributed Variational Inequalities: Unified Analysis and Local Updates
Siqi Zhang, S. Choudhury, Sebastian U. Stich, Nicolas Loizou
FedML
08 Jun 2023
Emerging Trends in Federated Learning: From Model Fusion to Federated X Learning
Shaoxiong Ji, Yue Tan, Teemu Saravirta, Zhiqin Yang, Yixin Liu, Lauri Vasankari, Shirui Pan, Guodong Long, A. Walid
FedML
25 Feb 2021
Linear Convergence in Federated Learning: Tackling Client Heterogeneity and Sparse Gradients
A. Mitra, Rayana H. Jaafar, George J. Pappas, Hamed Hassani
FedML
14 Feb 2021