Convergence of Distributed Stochastic Variance Reduced Methods without Sampling Extra Data
arXiv:1905.12648

29 May 2019
Shicong Cen
Huishuai Zhang
Yuejie Chi
Wei Chen
Tie-Yan Liu
    FedML

Papers citing "Convergence of Distributed Stochastic Variance Reduced Methods without Sampling Extra Data"

11 papers shown
SPIDER: Near-Optimal Non-Convex Optimization via Stochastic Path Integrated Differential Estimator
Cong Fang
C. J. Li
Zhouchen Lin
Tong Zhang
04 Jul 2018
Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training
Chengyue Wu
Song Han
Huizi Mao
Yu Wang
W. Dally
05 Dec 2017
Gradient Sparsification for Communication-Efficient Distributed Optimization
Jianqiao Wangni
Jialei Wang
Ji Liu
Tong Zhang
26 Oct 2017
TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning
W. Wen
Cong Xu
Feng Yan
Chunpeng Wu
Yandan Wang
Yiran Chen
Hai Helen Li
22 May 2017
CoCoA: A General Framework for Communication-Efficient Distributed Optimization
Virginia Smith
Simone Forte
Chenxin Ma
Martin Takáč
Michael I. Jordan
Martin Jaggi
07 Nov 2016
AIDE: Fast and Communication Efficient Distributed Optimization
Sashank J. Reddi
Jakub Konecný
Peter Richtárik
Barnabás Póczós
Alex Smola
24 Aug 2016
Federated Optimization: Distributed Optimization Beyond the Datacenter
Jakub Konecný
H. B. McMahan
Daniel Ramage
FedML
11 Nov 2015
SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives
Aaron Defazio
Francis R. Bach
Simon Lacoste-Julien
ODL
01 Jul 2014
A Proximal Stochastic Gradient Method with Progressive Variance Reduction
Lin Xiao
Tong Zhang
ODL
19 Mar 2014
Minimizing Finite Sums with the Stochastic Average Gradient
Mark Schmidt
Nicolas Le Roux
Francis R. Bach
10 Sep 2013
Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization
Shai Shalev-Shwartz
Tong Zhang
10 Sep 2012