Memory and Communication Efficient Distributed Stochastic Optimization with Minibatch-Prox

21 February 2017
Jialei Wang, Weiran Wang, Nathan Srebro

Papers citing "Memory and Communication Efficient Distributed Stochastic Optimization with Minibatch-Prox"

14 papers shown

Sharper Analysis for Minibatch Stochastic Proximal Point Methods: Stability, Smoothness, and Deviation
Xiao-Tong Yuan, P. Li
09 Jan 2023

Uniform Stability for First-Order Empirical Risk Minimization
Amit Attia, Tomer Koren
17 Jul 2022

Is Local SGD Better than Minibatch SGD?
Blake E. Woodworth, Kumar Kshitij Patel, Sebastian U. Stich, Zhen Dai, Brian Bullins, H. B. McMahan, Ohad Shamir, Nathan Srebro
FedML
18 Feb 2020

Communication-Efficient Accurate Statistical Estimation
Jianqing Fan, Yongyi Guo, Kaizheng Wang
12 Jun 2019

Generalized Inverse Optimization through Online Learning
Chaosheng Dong, Yiran Chen, Bo Zeng
03 Oct 2018

Don't Use Large Mini-Batches, Use Local SGD
Tao R. Lin, Sebastian U. Stich, Kumar Kshitij Patel, Martin Jaggi
22 Aug 2018

The Effect of Network Width on the Performance of Large-batch Training
Lingjiao Chen, Hongyi Wang, Jinman Zhao, Dimitris Papailiopoulos, Paraschos Koutris
11 Jun 2018

Double Quantization for Communication-Efficient Distributed Optimization
Yue Yu, Jiaxiang Wu, Longbo Huang
MQ
25 May 2018

Gradient Sparsification for Communication-Efficient Distributed Optimization
Jianqiao Wangni, Jialei Wang, Ji Liu, Tong Zhang
26 Oct 2017

Stochastic Nonconvex Optimization with Large Minibatches
Weiran Wang, Nathan Srebro
25 Sep 2017

On the convergence properties of a $K$-step averaging stochastic gradient descent algorithm for nonconvex optimization
Fan Zhou, Guojing Cong
03 Aug 2017

A Proximal Stochastic Gradient Method with Progressive Variance Reduction
Lin Xiao, Tong Zhang
ODL
19 Mar 2014

A simpler approach to obtaining an O(1/t) convergence rate for the projected stochastic subgradient method
Simon Lacoste-Julien, Mark W. Schmidt, Francis R. Bach
10 Dec 2012

Optimal Distributed Online Prediction using Mini-Batches
O. Dekel, Ran Gilad-Bachrach, Ohad Shamir, Lin Xiao
07 Dec 2010