Layer-Parallel Training of Deep Residual Neural Networks

11 December 2018
Stefanie Günther, Lars Ruthotto, J. Schroder, E. Cyr, N. Gauger

Papers citing "Layer-Parallel Training of Deep Residual Neural Networks"

17 / 17 papers shown
Rethinking the Relationship between Recurrent and Non-Recurrent Neural Networks: A Study in Sparsity
Quincy Hershey, Randy Paffenroth, Harsh Nilesh Pathak, Simon Tavener
01 Apr 2024

Machine learning and domain decomposition methods -- a survey
A. Klawonn, M. Lanser, J. Weber
AI4CE
21 Dec 2023

Multilevel-in-Layer Training for Deep Neural Network Regression
Colin Ponce, Ruipeng Li, Christina Mao, P. Vassilevski
AI4CE
11 Nov 2022

The phase unwrapping of under-sampled interferograms using radial basis function neural networks
P. Gourdain, Aidan Bachmann
19 Oct 2022

TO-FLOW: Efficient Continuous Normalizing Flows with Temporal Optimization adjoint with Moving Speed
Shian Du, Yihong Luo, Wei Chen, Jian Xu, Delu Zeng
19 Mar 2022

Parallel Training of GRU Networks with a Multi-Grid Solver for Long Sequences
G. Moon, E. Cyr
07 Mar 2022

Quantized Convolutional Neural Networks Through the Lens of Partial Differential Equations
Ido Ben-Yair, Gil Ben Shalom, Moshe Eliasof, Eran Treister
MQ
31 Aug 2021

ResIST: Layer-Wise Decomposition of ResNets for Distributed Training
Chen Dun, Cameron R. Wolfe, C. Jermaine, Anastasios Kyrillidis
02 Jul 2021

Parareal Neural Networks Emulating a Parallel-in-time Algorithm
Zhanyu Ma, Jiyang Xie, Jingyi Yu
AI4CE
16 Mar 2021

GIST: Distributed Training for Large-Scale Graph Convolutional Networks
Cameron R. Wolfe, Jingkang Yang, Arindam Chowdhury, Chen Dun, Artun Bayer, Santiago Segarra, Anastasios Kyrillidis
BDL, GNN, LRM
20 Feb 2021

Parallel Blockwise Knowledge Distillation for Deep Neural Network Compression
Cody Blakeney, Xiaomin Li, Yan Yan, Ziliang Zong
05 Dec 2020

A Differential Game Theoretic Neural Optimizer for Training Residual Networks
Guan-Horng Liu, T. Chen, Evangelos A. Theodorou
17 Jul 2020

Layer-Parallel Training with GPU Concurrency of Deep Residual Neural Networks via Nonlinear Multigrid
Andrew Kirby, S. Samsi, Michael Jones, Albert Reuther, J. Kepner, V. Gadepally
14 Jul 2020

Discretize-Optimize vs. Optimize-Discretize for Time-Series Regression and Continuous Normalizing Flows
Derek Onken, Lars Ruthotto
BDL
27 May 2020

Fractional Deep Neural Network via Constrained Optimization
Harbir Antil, R. Khatri, R. Löhner, Deepanshu Verma
01 Apr 2020

Multilevel Initialization for Layer-Parallel Deep Neural Network Training
E. Cyr, Stefanie Günther, J. Schroder
AI4CE
19 Dec 2019

Distributed Training of Deep Neural Networks: Theoretical and Practical Limits of Parallel Scalability
J. Keuper, Franz-Josef Pfreundt
GNN
22 Sep 2016