Communication Efficient Distributed Optimization using an Approximate Newton-type Method (arXiv:1312.7853)
30 December 2013
Ohad Shamir, Nathan Srebro, Tong Zhang
Papers citing "Communication Efficient Distributed Optimization using an Approximate Newton-type Method" (9 / 9 papers shown)
Revisiting LocalSGD and SCAFFOLD: Improved Rates and Missing Analysis (08 Jan 2025)
Ruichen Luo, Sebastian U. Stich, Samuel Horváth, Martin Takáč

Distributed Event-Based Learning via ADMM (17 May 2024) [FedML]
Güner Dilsad Er, Sebastian Trimpe, Michael Muehlebach

Concentration of Non-Isotropic Random Tensors with Applications to Learning and Empirical Risk Minimization (04 Feb 2021)
Mathieu Even, Laurent Massoulié

Mime: Mimicking Centralized Stochastic Algorithms in Federated Learning (08 Aug 2020) [FedML]
Sai Praneeth Karimireddy, Martin Jaggi, Satyen Kale, M. Mohri, Sashank J. Reddi, Sebastian U. Stich, A. Suresh

Federated Transfer Learning with Dynamic Gradient Aggregation (06 Aug 2020) [FedML]
Dimitrios Dimitriadis, K. Kumatani, R. Gmyr, Yashesh Gaur, Sefik Emre Eskimez

Stochastic Channel-Based Federated Learning for Medical Data Privacy Preserving (23 Oct 2019) [FedML, OOD]
Rulin Shao, Hongyu Hè, Hui Liu, Dianbo Liu

Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization (10 Sep 2012)
Shai Shalev-Shwartz, Tong Zhang

A Reliable Effective Terascale Linear Learning System (19 Oct 2011)
Alekh Agarwal, O. Chapelle, Miroslav Dudík, John Langford

Better Mini-Batch Algorithms via Accelerated Gradient Methods (22 Jun 2011) [ODL]
Andrew Cotter, Ohad Shamir, Nathan Srebro, Karthik Sridharan