Faster SGD training by minibatch persistency

M. Fischetti, Iacopo Mandatelli, Domenico Salvagnin
arXiv:1806.07353, 19 June 2018
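
Judging from the title, the paper's core idea is to reuse ("persist") each sampled minibatch for several consecutive SGD steps before fetching the next one, amortizing data-loading and transfer overhead at the cost of slightly staler gradients. Below is a minimal PyTorch sketch of that idea; the toy model, random data, and persistency factor k=4 are illustrative assumptions, not the authors' experimental setup.

    import torch
    from torch import nn
    from torch.utils.data import DataLoader, TensorDataset

    def train_with_persistency(model, loader, k=4, epochs=1, lr=0.01):
        # Plain SGD, but each minibatch is reused for k consecutive steps.
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):
            for x, y in loader:
                for _ in range(k):  # persist the same minibatch for k updates
                    opt.zero_grad()
                    loss = loss_fn(model(x), y)
                    loss.backward()
                    opt.step()

    # Toy usage with random data (placeholder, for illustration only).
    X = torch.randn(512, 20)
    y = torch.randint(0, 3, (512,))
    loader = DataLoader(TensorDataset(X, y), batch_size=64, shuffle=True)
    model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 3))
    train_with_persistency(model, loader)

With k=1 this reduces to standard minibatch SGD; larger k performs more updates per batch fetched, which is where the speedup would come from.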

Papers citing "Faster SGD training by minibatch persistency" (4 of 4 papers shown)

1. A Differential Equation Approach for Wasserstein GANs and Beyond
   Zachariah Malik, Yu-Jui Huang (25 May 2024)
2. Faster Neural Network Training with Data Echoing
   Dami Choi, Alexandre Passos, Christopher J. Shallue, George E. Dahl (12 Jul 2019)
3. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
   N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang (15 Sep 2016)
4. Improving neural networks by preventing co-adaptation of feature detectors
   Geoffrey E. Hinton, Nitish Srivastava, A. Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov (03 Jul 2012)