Model Accuracy and Runtime Tradeoff in Distributed Deep Learning: A Systematic Study

14 September 2015
Suyog Gupta, Wei Zhang, Fei Wang

Papers citing "Model Accuracy and Runtime Tradeoff in Distributed Deep Learning: A Systematic Study"

21 papers shown.

Taming Resource Heterogeneity In Distributed ML Training With Dynamic Batching
S. Tyagi, Prateek Sharma (20 May 2023)

STSyn: Speeding Up Local SGD with Straggler-Tolerant Synchronization
Feng Zhu, Jingjing Zhang, Xin Wang (06 Oct 2022)

Byzantine Fault Tolerance in Distributed Machine Learning: A Survey [AI4CE]
Djamila Bouhata, Hamouma Moumen, Moumen Hamouma, Ahcène Bounceur (05 May 2022)

DSAG: A mixed synchronous-asynchronous iterative method for straggler-resilient learning
A. Severinson, E. Rosnes, S. E. Rouayheb, Alexandre Graell i Amat (27 Nov 2021)

SpreadGNN: Serverless Multi-task Federated Learning for Graph Neural Networks [FedML]
Chaoyang He, Emir Ceyani, Keshav Balasubramanian, M. Annavaram, Salman Avestimehr (04 Jun 2021)

ScaleFreeCTR: MixCache-based Distributed Training System for CTR Models with Huge Embedding Table
Huifeng Guo, Wei Guo, Yong Gao, Ruiming Tang, Xiuqiang He, Wenzhi Liu (17 Apr 2021)

Consistent Lock-free Parallel Stochastic Gradient Descent for Fast and Stable Convergence
Karl Bäckström, Ivan Walulya, Marina Papatriantafilou, P. Tsigas (17 Feb 2021)

Communication optimization strategies for distributed deep neural network training: A survey
Shuo Ouyang, Dezun Dong, Yemao Xu, Liquan Xiao (06 Mar 2020)

Database Meets Deep Learning: Challenges and Opportunities
Wei Wang, Meihui Zhang, Gang Chen, H. V. Jagadish, Beng Chin Ooi, K. Tan (21 Jun 2019)

MATCHA: Speeding Up Decentralized SGD via Matching Decomposition Sampling
Jianyu Wang, Anit Kumar Sahu, Zhouyi Yang, Gauri Joshi, S. Kar (23 May 2019)

MD-GAN: Multi-Discriminator Generative Adversarial Networks for Distributed Datasets [GAN]
Corentin Hardy, Erwan Le Merrer, B. Sericola (09 Nov 2018)

Adaptive Communication Strategies to Achieve the Best Error-Runtime Trade-off in Local-Update SGD [FedML]
Jianyu Wang, Gauri Joshi (19 Oct 2018)

Cooperative SGD: A Unified Framework for the Design and Analysis of Communication-Efficient SGD Algorithms
Jianyu Wang, Gauri Joshi (22 Aug 2018)

Parallax: Sparsity-aware Data Parallel Training of Deep Neural Networks
Soojeong Kim, Gyeong-In Yu, Hojin Park, Sungwoo Cho, Eunji Jeong, Hyeonmin Ha, Sanha Lee, Joo Seong Jeong, Byung-Gon Chun (08 Aug 2018)

Training LSTM Networks with Resistive Cross-Point Devices
Tayfun Gokmen, Malte J. Rasch, W. Haensch (01 Jun 2018)

Deep Learning in Mobile and Wireless Networking: A Survey
Chaoyun Zhang, P. Patras, Hamed Haddadi (12 Mar 2018)

Slow and Stale Gradients Can Win the Race: Error-Runtime Trade-offs in Distributed SGD
Sanghamitra Dutta, Gauri Joshi, Soumyadip Ghosh, Parijat Dube, P. Nagpurkar (03 Mar 2018)

Demystifying Parallel and Distributed Deep Learning: An In-Depth Concurrency Analysis [GNN]
Tal Ben-Nun, Torsten Hoefler (26 Feb 2018)

Collaborative Deep Learning in Fixed Topology Networks [FedML]
Zhanhong Jiang, Aditya Balu, C. Hegde, S. Sarkar (23 Jun 2017)

Training Deep Convolutional Neural Networks with Resistive Cross-Point Devices
Tayfun Gokmen, M. Onen, W. Haensch (22 May 2017)

The Effects of Hyperparameters on SGD Training of Neural Networks
Thomas Breuel (12 Aug 2015)