ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Federated Learning of a Mixture of Global and Local Models
Filip Hanzely, Peter Richtárik
10 February 2020 · arXiv:2002.05516 · FedML

Papers citing "Federated Learning of a Mixture of Global and Local Models"

23 of 23 citing papers shown.

 1. Lazy But Effective: Collaborative Personalized Federated Learning with Heterogeneous Data
    Ljubomir Rokvic, Panayiotis Danassis, Boi Faltings · FedML · 05 May 2025

 2. dFLMoE: Decentralized Federated Learning via Mixture of Experts for Medical Data Analysis
    Luyuan Xie, Tianyu Luan, Wenyuan Cai, Guochen Yan, Zhaoyu Chen, Nan Xi, Yuejian Fang, Qingni Shen, Zhonghai Wu, Junsong Yuan · FedML · 13 Mar 2025

 3. PeFLL: Personalized Federated Learning by Learning to Learn
    Jonathan Scott, Hossein Zakerinia, Christoph H. Lampert · FedML · 17 Jan 2025

 4. pFedGPA: Diffusion-based Generative Parameter Aggregation for Personalized Federated Learning
    Jiahao Lai, Jiaqiang Li, Jian Xu, Yanru Wu, Boshi Tang, Siqi Chen, Yongfeng Huang, Wenbo Ding, Yang Li · FedML · 09 Sep 2024

 5. Federated Learning over Connected Modes
    Dennis Grinwald, Philipp Wiesner, Shinichi Nakajima · FedML · 05 Mar 2024

 6. ADEPT: Hierarchical Bayes Approach to Personalized Federated Unsupervised Learning
    Kaan Ozkara, Bruce Huang, Ruida Zhou, Suhas Diggavi · 19 Feb 2024

 7. Variance Reduced Local SGD with Lower Communication Complexity
    Xian-Feng Liang, Shuheng Shen, Jingchang Liu, Zhen Pan, Enhong Chen, Yifei Cheng · FedML · 30 Dec 2019

 8. Federated Variance-Reduced Stochastic Gradient Descent with Robustness to Byzantine Attacks
    Zhaoxian Wu, Qing Ling, Tianyi Chen, G. Giannakis · FedML, AAML · 29 Dec 2019

 9. Advances and Open Problems in Federated Learning
    Peter Kairouz, H. B. McMahan, Brendan Avent, A. Bellet, M. Bennis, ..., Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu, Sen Zhao · FedML, AI4CE · 10 Dec 2019

10. SCAFFOLD: Stochastic Controlled Averaging for Federated Learning
    Sai Praneeth Karimireddy, Satyen Kale, M. Mohri, Sashank J. Reddi, Sebastian U. Stich, A. Suresh · FedML · 14 Oct 2019

11. Tighter Theory for Local SGD on Identical and Heterogeneous Data
    Ahmed Khaled, Konstantin Mishchenko, Peter Richtárik · 10 Sep 2019

12. First Analysis of Local GD on Heterogeneous Data
    Ahmed Khaled, Konstantin Mishchenko, Peter Richtárik · FedML · 10 Sep 2019

13. One Method to Rule Them All: Variance Reduction for Data, Parameters and Many New Methods
    Filip Hanzely, Peter Richtárik · 27 May 2019

14. A Unified Theory of SGD: Variance Reduction, Sampling, Quantization and Coordinate Descent
    Eduard A. Gorbunov, Filip Hanzely, Peter Richtárik · 27 May 2019

15. Don't Jump Through Hoops and Remove Those Loops: SVRG and Katyusha are Better Without the Outer Loop
    D. Kovalev, Samuel Horváth, Peter Richtárik · 24 Jan 2019

16. Local SGD Converges Fast and Communicates Little
    Sebastian U. Stich · FedML · 24 May 2018

17. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
    Chelsea Finn, Pieter Abbeel, Sergey Levine · OOD · 09 Mar 2017

18. Federated Learning: Strategies for Improving Communication Efficiency
    Jakub Konecný, H. B. McMahan, Felix X. Yu, Peter Richtárik, A. Suresh, Dave Bacon · FedML · 18 Oct 2016

19. Federated Optimization: Distributed Machine Learning for On-Device Intelligence
    Jakub Konecný, H. B. McMahan, Daniel Ramage, Peter Richtárik · FedML · 08 Oct 2016

20. AIDE: Fast and Communication Efficient Distributed Optimization
    Sashank J. Reddi, Jakub Konecný, Peter Richtárik, Barnabás Póczós, Alex Smola · 24 Aug 2016

21. SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives
    Aaron Defazio, Francis R. Bach, Simon Lacoste-Julien · ODL · 01 Jul 2014

22. A Proximal Stochastic Gradient Method with Progressive Variance Reduction
    Lin Xiao, Tong Zhang · ODL · 19 Mar 2014

23. Parallel Coordinate Descent Methods for Big Data Optimization
    Peter Richtárik, Martin Takáč · 04 Dec 2012