arXiv 1312.6186
GPU Asynchronous Stochastic Gradient Descent to Speed Up Neural Network Training
21 December 2013
T. Paine, Hailin Jin, Jianchao Yang, Zhe Lin, Thomas Huang
Papers citing "GPU Asynchronous Stochastic Gradient Descent to Speed Up Neural Network Training" (5 of 5 papers shown)
Optimizing Multi-GPU Parallelization Strategies for Deep Learning Training
Saptadeep Pal, Eiman Ebrahimi, A. Zulfiqar, Yaosheng Fu, Victor Zhang, Szymon Migacz, D. Nellans, Puneet Gupta
30 Jul 2019
Demystifying Parallel and Distributed Deep Learning: An In-Depth Concurrency Analysis
Tal Ben-Nun, Torsten Hoefler
26 Feb 2018
Asynchronous Parallel Stochastic Gradient for Nonconvex Optimization
Xiangru Lian, Yijun Huang, Y. Li, Ji Liu
27 Jun 2015
Deep Image: Scaling up Image Recognition
Ren Wu, Shengen Yan, Yi Shan, Qingqing Dang, Gang Sun
13 Jan 2015
Improving neural networks by preventing co-adaptation of feature detectors
Geoffrey E. Hinton, Nitish Srivastava, A. Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov
03 Jul 2012