Distributed Training of Deep Neural Networks: Theoretical and Practical Limits of Parallel Scalability

22 September 2016
J. Keuper, Franz-Josef Pfreundt
GNN

Papers citing "Distributed Training of Deep Neural Networks: Theoretical and Practical Limits of Parallel Scalability"

13 / 13 papers shown

Asynchronous Stochastic Gradient Descent with Decoupled Backpropagation and Layer-Wise Updates
Cabrel Teguemne Fokam, Khaleelulla Khan Nazeer, Lukas König, David Kappel, Anand Subramoney
08 Oct 2024

Exploring shared memory architectures for end-to-end gigapixel deep learning
Lucas W. Remedios, L. Cai, Samuel W. Remedios, Karthik Ramadass, Aravind Krishnan, ..., C. Cui, Shunxing Bao, Lori A. Coburn, Yuankai Huo, Bennett A. Landman
MedIm, VLM
24 Apr 2023

Distributed Deep Reinforcement Learning: An Overview
Mohammad Reza Samsami, Hossein Alimadad
OffRL
22 Nov 2020

Which scaling rule applies to Artificial Neural Networks
János Végh
15 May 2020

Do we know the operating principles of our computers better than those of our brain?
J. Végh, Ádám-József Berki
06 May 2020

Priority-based Parameter Propagation for Distributed DNN Training
Anand Jayarajan, Jinliang Wei, Garth A. Gibson, Alexandra Fedorova, Gennady Pekhimenko
AI4CE
10 May 2019

AI Enabling Technologies: A Survey
V. Gadepally, Justin A. Goodwin, J. Kepner, Albert Reuther, Hayley Reynolds, S. Samsi, Jonathan Su, David Martinez
08 May 2019

Anytime Stochastic Gradient Descent: A Time to Hear from all the Workers
Nuwan S. Ferdinand, S. Draper
06 Oct 2018

Don't Use Large Mini-Batches, Use Local SGD
Tao R. Lin, Sebastian U. Stich, Kumar Kshitij Patel, Martin Jaggi
22 Aug 2018

Distributed Deep Reinforcement Learning: Learn how to play Atari games in 21 minutes
Igor Adamski, R. Adamski, T. Grel, Adam Jedrych, Kamil Kaczmarek, Henryk Michalewski
OffRL
09 Jan 2018

HPC Cloud for Scientific and Business Applications: Taxonomy, Vision, and Research Challenges
M. Netto, R. Calheiros, Eduardo Rodrigues, R. L. F. Cunha, Rajkumar Buyya
24 Oct 2017

What does fault tolerant Deep Learning need from MPI?
Vinay C. Amatya, Abhinav Vishnu, Charles Siegel, J. Daily
11 Sep 2017

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
ODL
15 Sep 2016