Consistent Lock-free Parallel Stochastic Gradient Descent for Fast and Stable Convergence

17 February 2021 · arXiv:2102.09032

Karl Bäckström, Ivan Walulya, Marina Papatriantafilou, P. Tsigas
Papers citing "Consistent Lock-free Parallel Stochastic Gradient Descent for Fast and Stable Convergence" (2 papers)

Adaptive Elastic Training for Sparse Deep Learning on Heterogeneous Multi-GPU Servers
Yujing Ma, Florin Rusu, Kesheng Wu, A. Sim
13 Oct 2021

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
15 Sep 2016