Deep Learning at 15PF: Supervised and Semi-Supervised Classification for Scientific Data (arXiv:1708.05256)

17 August 2017
Thorsten Kurth, Jian Zhang, N. Satish, Ioannis Mitliagkas, Evan Racah, M. Patwary, T. Malas, N. Sundaram, W. Bhimji, Mikhail E. Smorkalov, J. Deslippe, Mikhail Shiryaev, Srinivas Sridharan, P. Prabhat, Pradeep Dubey

Papers citing "Deep Learning at 15PF: Supervised and Semi-Supervised Classification for Scientific Data"

14 / 14 papers shown
Identifying the atmospheric drivers of drought and heat using a smoothed deep learning approach
M. Mittermeier, M. Weigert, David Rügamer
09 Nov 2021

Invariance Principle Meets Information Bottleneck for Out-of-Distribution Generalization
Kartik Ahuja, Ethan Caballero, Dinghuai Zhang, Jean-Christophe Gagnon-Audet, Yoshua Bengio, Ioannis Mitliagkas, Irina Rish
OOD
11 Jun 2021

AIPerf: Automated machine learning as an AI-HPC benchmark
Zhixiang Ren, Yongheng Liu, Tianhui Shi, Lei Xie, Yue Zhou, Jidong Zhai, Youhui Zhang, Yunquan Zhang, Wenguang Chen
17 Aug 2020

Reducing Data Motion to Accelerate the Training of Deep Neural Networks
Sicong Zhuang, Cristiano Malossi, Marc Casas
05 Apr 2020

A Survey on Distributed Machine Learning
Joost Verbraeken, Matthijs Wolting, Jonathan Katzy, Jeroen Kloppenburg, Tim Verbelen, Jan S. Rellermeyer
OOD
20 Dec 2019

PipeMare: Asynchronous Pipeline Parallel DNN Training
Bowen Yang, Jian Zhang, Jonathan Li, Christopher Ré, Christopher R. Aberger, Christopher De Sa
09 Oct 2019

AI Enabling Technologies: A Survey
V. Gadepally, Justin A. Goodwin, J. Kepner, Albert Reuther, Hayley Reynolds, S. Samsi, Jonathan Su, David Martinez
08 May 2019

Improving Strong-Scaling of CNN Training by Exploiting Finer-Grained Parallelism
Nikoli Dryden, N. Maruyama, Tom Benson, Tim Moon, M. Snir, B. Van Essen
15 Mar 2019

Augment your batch: better training with larger batches
Elad Hoffer, Tal Ben-Nun, Itay Hubara, Niv Giladi, Torsten Hoefler, Daniel Soudry
ODL
27 Jan 2019

Demystifying Parallel and Distributed Deep Learning: An In-Depth Concurrency Analysis
Tal Ben-Nun, Torsten Hoefler
GNN
26 Feb 2018

A Spatial Mapping Algorithm with Applications in Deep Learning-Based Structure Classification
T. Corcoran, R. Zamora-Resendiz, Xinlian Liu, S. Crivelli
3DPC, 3DV
07 Feb 2018

On Scale-out Deep Learning Training for Cloud and HPC
Srinivas Sridharan, K. Vaidyanathan, Dhiraj D. Kalamkar, Dipankar Das, Mikhail E. Smorkalov, ..., Dheevatsa Mudigere, Naveen Mellempudi, Sasikanth Avancha, Bharat Kaul, Pradeep Dubey
BDL
24 Jan 2018

Scale out for large minibatch SGD: Residual network training on ImageNet-1K with improved accuracy and reduced time to train
V. Codreanu, Damian Podareanu, V. Saletore
12 Nov 2017

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
ODL
15 Sep 2016