Dynamical Isometry is Achieved in Residual Networks in a Universal Way for any Activation Function

24 September 2018
W. Tarnowski, P. Warchol, Stanislaw Jastrzebski, Jacek Tabor, M. Nowak
arXiv (abs) · PDF · HTML

Papers citing "Dynamical Isometry is Achieved in Residual Networks in a Universal Way for any Activation Function"

19 papers shown

• Spectrum concentration in deep residual learning: a free probability approach
  Zenan Ling, Xing He, Robert C. Qiu · 31 Jul 2018

• Dynamical Isometry and a Mean Field Theory of CNNs: How to Train 10,000-Layer Vanilla Convolutional Neural Networks
  Lechao Xiao, Yasaman Bahri, Jascha Narain Sohl-Dickstein, S. Schoenholz, Jeffrey Pennington · 14 Jun 2018

• The Dynamics of Learning: A Random Matrix Approach
  Zhenyu Liao, Romain Couillet · 30 May 2018 · AI4CE

• How to Start Training: The Effect of Initialization and Architecture
  Boris Hanin, David Rolnick · 05 Mar 2018

• The Emergence of Spectral Universality in Deep Networks
  Jeffrey Pennington, S. Schoenholz, Surya Ganguli · 27 Feb 2018

• Mean Field Residual Networks: On the Edge of Chaos
  Greg Yang, S. Schoenholz · 24 Dec 2017

• Resurrecting the sigmoid in deep learning through dynamical isometry: theory and practice
  Jeffrey Pennington, S. Schoenholz, Surya Ganguli · 13 Nov 2017 · ODL

• Deep Residual Networks and Weight Initialization
  Masato Taki · 09 Sep 2017 · ODL

• Self-Normalizing Neural Networks
  Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter · 08 Jun 2017

• The Shattered Gradients Problem: If resnets are the answer, then what is the question?
  David Balduzzi, Marcus Frean, Lennox Leary, J. P. Lewis, Kurt Wan-Duo Ma, Brian McWilliams · 28 Feb 2017 · ODL

• A Random Matrix Approach to Neural Networks
  Cosme Louart, Zhenyu Liao, Romain Couillet · 17 Feb 2017

• Exponential expressivity in deep neural networks through transient chaos
  Ben Poole, Subhaneil Lahiri, M. Raghu, Jascha Narain Sohl-Dickstein, Surya Ganguli · 16 Jun 2016

• An Analysis of Deep Neural Network Models for Practical Applications
  A. Canziani, Adam Paszke, Eugenio Culurciello · 24 May 2016

• Identity Mappings in Deep Residual Networks
  Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun · 16 Mar 2016

• Deep Residual Learning for Image Recognition
  Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun · 10 Dec 2015 · MedIm

• All you need is a good init
  Dmytro Mishkin, Jiří Matas · 19 Nov 2015 · ODL

• Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
  Sergey Ioffe, Christian Szegedy · 11 Feb 2015 · OOD

• The Loss Surfaces of Multilayer Networks
  A. Choromańska, Mikael Henaff, Michaël Mathieu, Gerard Ben Arous, Yann LeCun · 30 Nov 2014 · ODL

• Exact solutions to the nonlinear dynamics of learning in deep linear neural networks
  Andrew M. Saxe, James L. McClelland, Surya Ganguli · 20 Dec 2013 · ODL