Improving Strong-Scaling of CNN Training by Exploiting Finer-Grained Parallelism

15 March 2019
Nikoli Dryden, N. Maruyama, Tom Benson, Tim Moon, M. Snir, B. Van Essen

Papers citing "Improving Strong-Scaling of CNN Training by Exploiting Finer-Grained Parallelism"

13 papers shown

Pex: Memory-efficient Microcontroller Deep Learning through Partial Execution
Edgar Liberis, Nicholas D. Lane
30 Nov 2022

HammingMesh: A Network Topology for Large-Scale Deep Learning
Torsten Hoefler, Tommaso Bonato, Daniele De Sensi, Salvatore Di Girolamo, Shigang Li, Marco Heddes, Jon Belk, Deepak Goel, Miguel Castro, Steve Scott
Tags: 3DH, GNN, AI4CE
03 Sep 2022

Design Automation for Fast, Lightweight, and Effective Deep Learning Models: A Survey
Dalin Zhang, Kaixuan Chen, Yan Zhao, B. Yang, Li-Ping Yao, Christian S. Jensen
22 Aug 2022

An Oracle for Guiding Large-Scale Model/Hybrid Parallel Training of Convolutional Neural Networks
A. Kahira, Truong Thao Nguyen, L. Bautista-Gomez, Ryousei Takano, Rosa M. Badia, M. Wahib
19 Apr 2021

Towards a Scalable and Distributed Infrastructure for Deep Learning Applications
Bita Hasheminezhad, S. Shirzad, Nanmiao Wu, Patrick Diehl, Hannes Schulz, Hartmut Kaiser
Tags: GNN, AI4CE
06 Oct 2020

The Case for Strong Scaling in Deep Learning: Training Large 3D CNNs with Hybrid Parallelism
Yosuke Oyama, N. Maruyama, Nikoli Dryden, Erin McCarthy, P. Harrington, J. Balewski, Satoshi Matsuoka, Peter Nugent, B. Van Essen
Tags: 3DV, AI4CE
25 Jul 2020

ICA-UNet: ICA Inspired Statistical UNet for Real-time 3D Cardiac Cine MRI Segmentation
Tianchen Wang, Xiaowei Xu, Jinjun Xiong, Qianjun Jia, Haiyun Yuan, Meiping Huang, Jian Zhuang, Yiyu Shi
18 Jul 2020

Data Movement Is All You Need: A Case Study on Optimizing Transformers
A. Ivanov, Nikoli Dryden, Tal Ben-Nun, Shigang Li, Torsten Hoefler
30 Jun 2020

Reducing Communication in Graph Neural Network Training
Alok Tripathy, Katherine Yelick, A. Buluç
Tags: GNN
07 May 2020

Optimal checkpointing for heterogeneous chains: how to train deep neural networks with limited memory
Julien Herrmann, Olivier Beaumont, Lionel Eyraud-Dubois, Alexis Joly, Alena Shilova
Tags: BDL
27 Nov 2019

Demystifying Parallel and Distributed Deep Learning: An In-Depth Concurrency Analysis
Tal Ben-Nun, Torsten Hoefler
Tags: GNN
26 Feb 2018

A Survey on Deep Learning in Medical Image Analysis
G. Litjens, Thijs Kooi, B. Bejnordi, A. Setio, F. Ciompi, Mohsen Ghafoorian, Jeroen van der Laak, Bram van Ginneken, C. I. Sánchez
Tags: OOD
19 Feb 2017

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
Tags: ODL
15 Sep 2016