Efficient Distributed SGD with Variance Reduction
Soham De, Tom Goldstein
9 December 2015 · arXiv:1512.02970
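To give a feel for the variance-reduction idea behind the listed paper, here is a minimal Python sketch of a generic SVRG-style, data-parallel SGD loop. It is not the algorithm of De & Goldstein (2015); the synthetic least-squares problem, the shard/worker split, and all parameter values are illustrative assumptions only.

```python
# A minimal SVRG-style variance-reduced SGD sketch on a synthetic least-squares
# problem, with the data split across simulated "workers". Illustrative only;
# not the method proposed in the cited paper.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic problem: minimize (1/2n) * ||A w - b||^2
n, d, n_workers = 1000, 20, 4
A = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
b = A @ w_true + 0.01 * rng.normal(size=n)

shards = np.array_split(np.arange(n), n_workers)  # each worker owns one data shard

def grad(idx, w):
    """Gradient of the least-squares loss restricted to rows `idx`."""
    Ai, bi = A[idx], b[idx]
    return Ai.T @ (Ai @ w - bi) / len(idx)

w = np.zeros(d)
lr, n_epochs, inner_steps = 0.05, 30, 50

for epoch in range(n_epochs):
    # Snapshot step: full gradient at the current iterate, averaged over workers.
    w_snap = w.copy()
    mu = np.mean([grad(shard, w_snap) for shard in shards], axis=0)

    for _ in range(inner_steps):
        # Each worker draws a minibatch from its shard and applies the
        # variance-reduced correction; the "server" averages the results.
        corrected = []
        for shard in shards:
            batch = rng.choice(shard, size=10, replace=False)
            corrected.append(grad(batch, w) - grad(batch, w_snap) + mu)
        w -= lr * np.mean(corrected, axis=0)

print("final loss:", 0.5 * np.mean((A @ w - b) ** 2))
```

The correction term `grad(batch, w) - grad(batch, w_snap) + mu` keeps each stochastic step unbiased while shrinking its variance as the iterate approaches the snapshot, which is the common thread in the variance-reduced methods listed below.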

Papers citing "Efficient Distributed SGD with Variance Reduction" (7 papers shown)

1. Federated Stochastic Gradient Langevin Dynamics
   Khaoula El Mekkaoui, Diego Mesquita, P. Blomstedt, Samuel Kaski
   FedML · 23 Apr 2020

2. Federated Variance-Reduced Stochastic Gradient Descent with Robustness to Byzantine Attacks
   Zhaoxian Wu, Qing Ling, Tianyi Chen, G. Giannakis
   FedML, AAML · 29 Dec 2019

3. Variance-Reduced Stochastic Learning under Random Reshuffling
   Bicheng Ying, Kun Yuan, Ali H. Sayed
   04 Aug 2017

4. Collaborative Deep Learning in Fixed Topology Networks
   Zhanhong Jiang, Aditya Balu, Chinmay Hegde, Soumik Sarkar
   FedML · 23 Jun 2017

5. Big Batch SGD: Automated Inference using Adaptive Batch Sizes
   Soham De, A. Yadav, David Jacobs, Tom Goldstein
   ODL · 18 Oct 2016

6. SCOPE: Scalable Composite Optimization for Learning on Spark
   Shen-Yi Zhao, Ru Xiang, Yinghuan Shi, Peng Gao, Wu-Jun Li
   30 Jan 2016

7. A Proximal Stochastic Gradient Method with Progressive Variance Reduction
   Lin Xiao, Tong Zhang
   ODL · 19 Mar 2014