Accelerating Parallel Stochastic Gradient Descent via Non-blocking Mini-batches

2 November 2022
Haoze He, Parijat Dube
arXiv: 2211.00889
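
For orientation, the toy Python sketch below illustrates the general idea the title refers to: in non-blocking mini-batch parallelism, each worker draws its next mini-batch and applies its gradient as soon as it is ready, instead of waiting at a per-step synchronization barrier, at the cost of occasionally computing with slightly stale parameters. This is a minimal sketch of the concept only; the toy problem, names, and hyperparameters are invented here, and the paper's actual algorithm differs in its details.

# Minimal sketch of non-blocking mini-batch SGD (NOT the paper's algorithm).
# Workers update a shared parameter vector without a global per-step barrier.
import threading
import numpy as np

# Toy least-squares problem: minimize ||X w - y||^2 over w.
rng = np.random.default_rng(0)
X = rng.standard_normal((1024, 10))
true_w = rng.standard_normal(10)
y = X @ true_w

w = np.zeros(10)          # shared parameter vector
lock = threading.Lock()   # guards only the brief parameter write
LR, BATCH, STEPS = 0.01, 32, 200

def worker(seed):
    local_rng = np.random.default_rng(seed)  # per-thread RNG
    for _ in range(STEPS):
        # Draw the next mini-batch immediately: no barrier, no waiting
        # for slower workers to finish their current step.
        idx = local_rng.integers(0, len(X), size=BATCH)
        xb, yb = X[idx], y[idx]
        # The read of w is unsynchronized, so it may be slightly stale.
        grad = 2.0 * xb.T @ (xb @ w - yb) / BATCH
        with lock:                      # apply the update atomically
            w[:] = w - LR * grad        # in-place write to the shared vector

threads = [threading.Thread(target=worker, args=(s,)) for s in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("final loss:", float(np.mean((X @ w - y) ** 2)))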

Papers citing "Accelerating Parallel Stochastic Gradient Descent via Non-blocking Mini-batches"

1. Adjacent Leader Decentralized Stochastic Gradient Descent
   Haoze He, Jing Wang, A. Choromańska
   18 May 2024

2. On Efficient Training of Large-Scale Deep Learning Models: A Literature Review
   Li Shen, Yan Sun, Zhiyuan Yu, Liang Ding, Xinmei Tian, Dacheng Tao
   07 Apr 2023

3. Optimal Distributed Online Prediction using Mini-Batches
   O. Dekel, Ran Gilad-Bachrach, Ohad Shamir, Lin Xiao
   07 Dec 2010