ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Accelerating Deep Neural Network Training with Inconsistent Stochastic Gradient Descent

17 March 2016
Linnan Wang, Yi Yang, Martin Renqiang Min, S. Chakradhar
arXiv: 1603.05544

Papers citing "Accelerating Deep Neural Network Training with Inconsistent Stochastic Gradient Descent"

6 / 6 papers shown
Multiple Importance Sampling for Stochastic Gradient Estimation
Corentin Salaün, Xingchang Huang, Iliyan Georgiev, Niloy J. Mitra, Gurprit Singh
22 Jul 2024
Block-term Tensor Neural Networks
Jinmian Ye, Guangxi Li, Di Chen, Haiqin Yang, Shandian Zhe, Zenglin Xu
10 Oct 2020
Dynamic Stale Synchronous Parallel Distributed Training for Deep Learning
Xing Zhao, Aijun An, Junfeng Liu, B. Chen
16 Aug 2019
SuperNeurons: Dynamic GPU Memory Management for Training Deep Neural Networks
Linnan Wang, Jinmian Ye, Yiyang Zhao, Wei Wu, Ang Li, Shuaiwen Leon Song, Zenglin Xu, Tim Kraska
13 Jan 2018
Learning Compact Recurrent Neural Networks with Block-Term Tensor Decomposition
Jinmian Ye, Linnan Wang, Guangxi Li, Di Chen, Shandian Zhe, Xinqi Chu, Zenglin Xu
14 Dec 2017
Optimal Distributed Online Prediction using Mini-Batches
O. Dekel, Ran Gilad-Bachrach, Ohad Shamir, Lin Xiao
07 Dec 2010