Faster Adaptive Momentum-Based Federated Methods for Distributed Composition Optimization
Feihu Huang
3 November 2022 (arXiv: 2211.01883)
Community: FedML

Papers citing "Faster Adaptive Momentum-Based Federated Methods for Distributed Composition Optimization" (38 of 38 papers shown)

1. Stochastic Controlled Averaging for Federated Learning with Communication Compression. Xinmeng Huang, Ping Li, Xiaoyun Li. 16 Aug 2023.
2. FedNest: Federated Bilevel, Minimax, and Compositional Optimization. Davoud Ataee Tarzanagh, Mingchen Li, Christos Thrampoulidis, Samet Oymak. 04 May 2022. [FedML]
3. Optimal Algorithms for Stochastic Multi-Level Compositional Optimization. Wei Jiang, Bokun Wang, Yibo Wang, Lijun Zhang, Tianbao Yang. 15 Feb 2022.
4. Toward Communication Efficient Adaptive Gradient Method. Xiangyi Chen, Xiaoyun Li, P. Li. 10 Sep 2021. [FedML]
5. Compositional federated learning: Applications in distributionally robust averaging and meta learning. Feihu Huang, Junyi Li. 21 Jun 2021. [FedML]
6. STEM: A Stochastic Two-Sided Momentum Algorithm Achieving Near-Optimal Sample and Communication Complexities for Federated Learning. Prashant Khanduri, Pranay Sharma, Haibo Yang, Min-Fong Hong, Jia Liu, K. Rajawat, P. Varshney. 19 Jun 2021. [FedML]
7. SUPER-ADAM: Faster and Universal Framework of Adaptive Gradients. Feihu Huang, Junyi Li, Heng-Chiao Huang. 15 Jun 2021. [ODL]
8. Local Stochastic Gradient Descent Ascent: Convergence Analysis and Communication Efficiency. Yuyang Deng, M. Mahdavi. 25 Feb 2021.
9. Distributionally Robust Federated Averaging. Yuyang Deng, Mohammad Mahdi Kamani, M. Mahdavi. 25 Feb 2021. [FedML]
10. FLOP: Federated Learning on Medical Datasets using Partial Networks. Qiang Yang, Jianyi Zhang, Weituo Hao, Gregory P. Spell, Lawrence Carin. 10 Feb 2021. [FedML, OOD]
11. FedCluster: Boosting the Convergence of Federated Learning via Cluster-Cycling. Cheng Chen, Ziyi Chen, Yi Zhou, B. Kailkhura. 22 Sep 2020. [FedML]
12. Solving Stochastic Compositional Optimization is Nearly as Easy as Solving Stochastic Optimization. Tianyi Chen, Yuejiao Sun, W. Yin. 25 Aug 2020.
13. Federated Accelerated Stochastic Gradient Descent. Honglin Yuan, Tengyu Ma. 16 Jun 2020. [FedML]
14. Robust Federated Learning: The Case of Affine Distribution Shifts. Amirhossein Reisizadeh, Farzan Farnia, Ramtin Pedarsani, Ali Jadbabaie. 16 Jun 2020. [FedML, OOD]
15. Adaptive Personalized Federated Learning. Yuyang Deng, Mohammad Mahdi Kamani, M. Mahdavi. 30 Mar 2020. [FedML]
16. Adaptive Federated Optimization. Sashank J. Reddi, Zachary B. Charles, Manzil Zaheer, Zachary Garrett, Keith Rush, Jakub Konecný, Sanjiv Kumar, H. B. McMahan. 29 Feb 2020. [FedML]
17. Personalized Federated Learning: A Meta-Learning Approach. Alireza Fallah, Aryan Mokhtari, Asuman Ozdaglar. 19 Feb 2020. [FedML]
18. Compositional ADAM: An Adaptive Compositional Solver. Rasul Tutunov, Minne Li, Alexander I. Cowen-Rivers, Jun Wang, Haitham Bou-Ammar. 10 Feb 2020. [ODL]
19. Advances and Open Problems in Federated Learning. Peter Kairouz, H. B. McMahan, Brendan Avent, A. Bellet, M. Bennis, ..., Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu, Sen Zhao. 10 Dec 2019. [FedML, AI4CE]
20. Multi-Level Composite Stochastic Optimization via Nested Variance Reduction. Junyu Zhang, Lin Xiao. 29 Aug 2019.
21. A Hybrid Stochastic Optimization Framework for Stochastic Composite Nonconvex Optimization. Quoc Tran-Dinh, Nhan H. Pham, T. Dzung, Lam M. Nguyen. 08 Jul 2019.
22. On the Convergence of FedAvg on Non-IID Data. Xiang Li, Kaixuan Huang, Wenhao Yang, Shusen Wang, Zhihua Zhang. 04 Jul 2019. [FedML]
23. Momentum-Based Variance Reduction in Non-Convex SGD. Ashok Cutkosky, Francesco Orabona. 24 May 2019. [ODL]
24. On the Convergence of Adam and Beyond. Sashank J. Reddi, Satyen Kale, Surinder Kumar. 19 Apr 2019.
25. Agnostic Federated Learning. M. Mohri, Gary Sivek, A. Suresh. 01 Feb 2019. [FedML]
26. Error Feedback Fixes SignSGD and other Gradient Compression Schemes. Sai Praneeth Karimireddy, Quentin Rebjock, Sebastian U. Stich, Martin Jaggi. 28 Jan 2019.
27. On the Convergence of A Class of Adam-Type Algorithms for Non-Convex Optimization. Xiangyi Chen, Sijia Liu, Ruoyu Sun, Mingyi Hong. 08 Aug 2018.
28. SPIDER: Near-Optimal Non-Convex Optimization via Stochastic Path Integrated Differential Estimator. Cong Fang, C. J. Li, Zhouchen Lin, Tong Zhang. 04 Jul 2018.
29. Improved Sample Complexity for Stochastic Compositional Variance Reduced Gradient. Tianyi Lin, Chenyou Fan, Mengdi Wang, Michael I. Jordan. 01 Jun 2018.
30. Local SGD Converges Fast and Communicates Little. Sebastian U. Stich. 24 May 2018. [FedML]
31. Accelerated Method for Stochastic Composition Optimization with Nonsmooth Regularization. Zhouyuan Huo, Bin Gu, Ji Liu, Heng-Chiao Huang. 10 Nov 2017.
32. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks. Chelsea Finn, Pieter Abbeel, Sergey Levine. 09 Mar 2017. [OOD]
33. SARAH: A Novel Method for Machine Learning Problems Using Stochastic Recursive Gradient. Lam M. Nguyen, Jie Liu, K. Scheinberg, Martin Takáč. 01 Mar 2017. [ODL]
34. Accelerating Stochastic Composition Optimization. Mengdi Wang, Ji Liu, Ethan X. Fang. 25 Jul 2016.
35. Learning to learn by gradient descent by gradient descent. Marcin Andrychowicz, Misha Denil, Sergio Gomez Colmenarejo, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas. 14 Jun 2016.
36. Communication-Efficient Learning of Deep Networks from Decentralized Data. H. B. McMahan, Eider Moore, Daniel Ramage, S. Hampson, Blaise Agüera y Arcas. 17 Feb 2016. [FedML]
37. Adam: A Method for Stochastic Optimization. Diederik P. Kingma, Jimmy Ba. 22 Dec 2014. [ODL]
38. Stochastic Compositional Gradient Descent: Algorithms for Minimizing Compositions of Expected-Value Functions. Mengdi Wang, Ethan X. Fang, Han Liu. 14 Nov 2014.