GPU Asynchronous Stochastic Gradient Descent to Speed Up Neural Network Training

arXiv:1312.6186 · 21 December 2013
T. Paine, Hailin Jin, Jianchao Yang, Zhe Lin, Thomas Huang

Papers citing "GPU Asynchronous Stochastic Gradient Descent to Speed Up Neural Network Training"

5 of 5 citing papers shown.

Optimizing Multi-GPU Parallelization Strategies for Deep Learning Training
Saptadeep Pal, Eiman Ebrahimi, A. Zulfiqar, Yaosheng Fu, Victor Zhang, Szymon Migacz, D. Nellans, Puneet Gupta
30 Jul 2019

Demystifying Parallel and Distributed Deep Learning: An In-Depth Concurrency Analysis
Tal Ben-Nun, Torsten Hoefler
26 Feb 2018 · GNN

Asynchronous Parallel Stochastic Gradient for Nonconvex Optimization
Xiangru Lian, Yijun Huang, Y. Li, Ji Liu
27 Jun 2015

Deep Image: Scaling up Image Recognition
Ren Wu, Shengen Yan, Yi Shan, Qingqing Dang, Gang Sun
13 Jan 2015 · VLM

Improving neural networks by preventing co-adaptation of feature detectors
Geoffrey E. Hinton, Nitish Srivastava, A. Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov
03 Jul 2012 · VLM