HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent

28 June 2011 · arXiv:1106.5730
Feng Niu, Benjamin Recht, Christopher Ré, Stephen J. Wright
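
As context for the citation list below, here is a minimal sketch of the lock-free scheme the title refers to: several worker threads apply stochastic gradient updates to a shared parameter vector without any locking. The least-squares objective, step size, and thread count are illustrative assumptions, not the paper's reference implementation, and Python's GIL means the threads demonstrate the unsynchronized update pattern rather than genuine parallel speedup.

```python
import threading

import numpy as np

# Synthetic least-squares problem (an illustrative assumption, not data from the paper).
n_samples, n_features = 1000, 20
rng = np.random.default_rng(0)
X = rng.normal(size=(n_samples, n_features))
w_true = rng.normal(size=n_features)
y = X @ w_true + 0.01 * rng.normal(size=n_samples)

w = np.zeros(n_features)   # shared parameter vector, updated by all threads with no locks
step_size = 0.005

def worker(w, seed, n_steps):
    local_rng = np.random.default_rng(seed)
    for _ in range(n_steps):
        i = local_rng.integers(n_samples)     # pick one training example at random
        grad = (X[i] @ w - y[i]) * X[i]       # stochastic gradient of 0.5 * (x_i·w - y_i)^2
        w -= step_size * grad                 # in-place update, no synchronization

threads = [threading.Thread(target=worker, args=(w, seed, 5000)) for seed in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("distance to w_true:", np.linalg.norm(w - w_true))
```

On a sparse problem each update would touch only a few coordinates, which is the regime in which the paper argues that unsynchronized updates rarely interfere and the method attains near-linear speedups.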

Papers citing "HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent" (19 papers shown)

Ringmaster ASGD: The First Asynchronous SGD with Optimal Time Complexity
Artavazd Maranjyan, Alexander Tyurin, Peter Richtárik (27 Jan 2025)

Revisiting Reliability in Large-Scale Machine Learning Research Clusters
Apostolos Kokolis, Michael Kuchnik, John Hoffman, Adithya Kumar, Parth Malani, Faye Ma, Zachary DeVito, Siyang Song, Kalyan Saladi, Carole-Jean Wu (29 Oct 2024)

Asynchronous Stochastic Gradient Descent with Decoupled Backpropagation and Layer-Wise Updates
Cabrel Teguemne Fokam, Khaleelulla Khan Nazeer, Lukas König, David Kappel, Anand Subramoney (08 Oct 2024)

Ordered Momentum for Asynchronous SGD
Chang-Wei Shi, Yi-Rui Yang, Wu-Jun Li (27 Jul 2024)

Stochastic Polyak Step-sizes and Momentum: Convergence Guarantees and Practical Performance
Dimitris Oikonomou, Nicolas Loizou (06 Jun 2024)

SGD for Structured Nonconvex Functions: Learning Rates, Minibatching and Interpolation
Robert Mansel Gower, Othmane Sebbouh, Nicolas Loizou (18 Jun 2020)

The Implicit Regularization of Stochastic Gradient Flow for Least Squares
Alnur Ali, Yan Sun, Robert Tibshirani (17 Mar 2020)

Communication optimization strategies for distributed deep neural network training: A survey
Shuo Ouyang, Dezun Dong, Yemao Xu, Liquan Xiao (06 Mar 2020)

Deep Learning at the Edge
Sahar Voghoei, N. Tonekaboni, Jason G. Wallace, H. Arabnia (22 Oct 2019)

Integrated Model, Batch and Domain Parallelism in Training Neural Networks
A. Gholami, A. Azad, Peter H. Jin, Kurt Keutzer, A. Buluç (12 Dec 2017)

Deep Learning in the Automotive Industry: Applications and Tools
André Luckow, M. Cook, Nathan Ashcraft, Edwin Weill, Emil Djerekarov, Bennie Vorster (30 Apr 2017)

Heterogeneous Information Network Embedding for Meta Path based Proximity
Zhipeng Huang, N. Mamoulis (19 Jan 2017)

Fast and Reliable Parameter Estimation from Nonlinear Observations
Samet Oymak, Mahdi Soltanolkotabi (23 Oct 2016)

Asynchronous Stochastic Gradient Descent with Delay Compensation
Shuxin Zheng, Qi Meng, Taifeng Wang, Wei Chen, Nenghai Yu, Zhiming Ma, Tie-Yan Liu (27 Sep 2016)

Coordinate Friendly Structures, Algorithms and Applications
Zhimin Peng, Tianyu Wu, Yangyang Xu, Ming Yan, W. Yin (05 Jan 2016)

Optimal Distributed Online Prediction using Mini-Batches
O. Dekel, Ran Gilad-Bachrach, Ohad Shamir, Lin Xiao (07 Dec 2010)

Slow Learners are Fast
John Langford, Alex Smola, Martin A. Zinkevich (03 Nov 2009)

Sparse Online Learning via Truncated Gradient
John Langford, Lihong Li, Tong Zhang (28 Jun 2008)

Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization
Benjamin Recht, Maryam Fazel, P. Parrilo (28 Jun 2007)