Efficient Training of Convolutional Neural Nets on Large Distributed Systems

2 November 2017
Sameer Kumar
D. Sreedhar
Vaibhav Saxena
Yogish Sabharwal
Ashish Verma
ArXiv (abs) · PDF · HTML

Papers citing "Efficient Training of Convolutional Neural Nets on Large Distributed Systems"

2 papers shown
Faster Neural Network Training with Data Echoing
Dami Choi, Alexandre Passos, Christopher J. Shallue, George E. Dahl
12 Jul 2019
Parallax: Sparsity-aware Data Parallel Training of Deep Neural Networks
Soojeong Kim, Gyeong-In Yu, Hojin Park, Sungwoo Cho, Eunji Jeong, Hyeonmin Ha, Sanha Lee, Joo Seong Jeong, Byung-Gon Chun
08 Aug 2018