Globally Convergent Multilevel Training of Deep Residual Networks
15 July 2021
Alena Kopanicáková, Rolf Krause

Papers citing "Globally Convergent Multilevel Training of Deep Residual Networks"

15 / 15 papers shown

1. Large scale simulation of pressure induced phase-field fracture propagation using Utopia
   Patrick Zulian, Alena Kopanicáková, M. Nestola, A. Fink, N. Fadel, J. VandeVondele, Rolf Krause
   AI4CE · 25 Jul 2020 · 10 citations

2. Layer-Parallel Training with GPU Concurrency of Deep Residual Neural Networks via Nonlinear Multigrid
   Andrew Kirby, S. Samsi, Michael Jones, Albert Reuther, J. Kepner, V. Gadepally
   14 Jul 2020 · 12 citations

3. Dissecting Neural ODEs
   Stefano Massaroli, Michael Poli, Jinkyoo Park, Atsushi Yamashita, Hajime Asama
   19 Feb 2020 · 203 citations

4. Multilevel Initialization for Layer-Parallel Deep Neural Network Training
   E. Cyr, Stefanie Günther, J. Schroder
   AI4CE · 19 Dec 2019 · 11 citations

5. Layer-Parallel Training of Deep Residual Neural Networks
   Stefanie Günther, Lars Ruthotto, J. Schroder, E. Cyr, N. Gauger
   11 Dec 2018 · 90 citations

6. On the Computational Inefficiency of Large Batch Sizes for Stochastic Gradient Descent
   Noah Golmant, N. Vemuri, Z. Yao, Vladimir Feinberg, A. Gholami, Kai Rothauge, Michael W. Mahoney, Joseph E. Gonzalez
   30 Nov 2018 · 73 citations

7. Measuring the Effects of Data Parallelism on Neural Network Training
   Christopher J. Shallue, Jaehoon Lee, Joseph Antognini, Jascha Sohl-Dickstein, Roy Frostig, George E. Dahl
   08 Nov 2018 · 410 citations

8. Improving Generalization Performance by Switching from Adam to SGD
   N. Keskar, R. Socher
   ODL · 20 Dec 2017 · 523 citations

9. Multi-level Residual Networks from Dynamical Systems View
   B. Chang, Lili Meng, E. Haber, Frederick Tung, David Begert
   27 Oct 2017 · 172 citations

10. Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms
    Han Xiao, Kashif Rasul, Roland Vollgraf
    25 Aug 2017 · 8,876 citations

11. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
    N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
    ODL · 15 Sep 2016 · 2,936 citations

12. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs
    Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Patrick Murphy, Alan Yuille
    SSeg · 02 Jun 2016 · 18,224 citations

13. Deep Networks with Stochastic Depth
    Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, Kilian Q. Weinberger
    30 Mar 2016 · 2,356 citations

14. Identity Mappings in Deep Residual Networks
    Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
    16 Mar 2016 · 10,180 citations

15. A Stochastic Quasi-Newton Method for Large-Scale Optimization
    R. Byrd, Samantha Hansen, J. Nocedal, Y. Singer
    ODL · 27 Jan 2014 · 471 citations