ResearchTrend.AI

The Emergence of Spectral Universality in Deep Networks
Jeffrey Pennington, S. Schoenholz, Surya Ganguli
arXiv:1802.09979 · 27 February 2018

Papers citing "The Emergence of Spectral Universality in Deep Networks" (33 papers)
  1. Neural Redshift: Random Networks are not Random Functions · Damien Teney, A. Nicolicioiu, Valentin Hartmann, Ehsan Abbasnejad · 04 Mar 2024
  2. Quantitative CLTs in Deep Neural Networks · Stefano Favaro, Boris Hanin, Domenico Marinucci, I. Nourdin, G. Peccati · 12 Jul 2023 (BDL)
  3. NTK-SAP: Improving neural network pruning by aligning training dynamics · Yite Wang, Dawei Li, Ruoyu Sun · 06 Apr 2023
  4. On the Initialisation of Wide Low-Rank Feedforward Neural Networks · Thiziri Nait Saada, Jared Tanner · 31 Jan 2023
  5. Expected Gradients of Maxout Networks and Consequences to Parameter Initialization · Hanna Tseran, Guido Montúfar · 17 Jan 2023 (ODL)
  6. Statistical Physics of Deep Neural Networks: Initialization toward Optimal Channels · Kangyu Weng, Aohua Cheng, Ziyang Zhang, Pei Sun, Yang Tian · 04 Dec 2022
  7. On skip connections and normalisation layers in deep optimisation · L. MacDonald, Jack Valmadre, Hemanth Saratchandran, Simon Lucey · 10 Oct 2022 (ODL)
  8. Dynamical Isometry for Residual Networks · Advait Gadhikar, R. Burkholz · 05 Oct 2022 (ODL, AI4CE)
  9. AutoInit: Automatic Initialization via Jacobian Tuning · Tianyu He, Darshil Doshi, Andrey Gromov · 27 Jun 2022
  10. Universal characteristics of deep neural network loss surfaces from random matrix theory · Nicholas P. Baskerville, J. Keating, F. Mezzadri, J. Najnudel, Diego Granziol · 17 May 2022
  11. Random matrix analysis of deep neural network weight matrices · M. Thamm, Max Staats, B. Rosenow · 28 Mar 2022
  12. Extended critical regimes of deep neural networks · Chengqing Qu, Asem Wardak, P. Gong · 24 Mar 2022 (AI4CE)
  13. Lottery Tickets with Nonzero Biases · Jonas Fischer, Advait Gadhikar, R. Burkholz · 21 Oct 2021
  14. Clipped Hyperbolic Classifiers Are Super-Hyperbolic Classifiers · Yunhui Guo, Xudong Wang, Yubei Chen, Stella X. Yu · 23 Jul 2021
  15. Random Neural Networks in the Infinite Width Limit as Gaussian Processes · Boris Hanin · 04 Jul 2021 (BDL)
  16. Activation function design for deep networks: linearity and effective initialisation · Michael Murray, V. Abrol, Jared Tanner · 17 May 2021 (ODL, LLMSV)
  17. Asymptotic Freeness of Layerwise Jacobians Caused by Invariance of Multilayer Perceptron: The Haar Orthogonal Case · B. Collins, Tomohiro Hayase · 24 Mar 2021
  18. Understanding self-supervised Learning Dynamics without Contrastive Pairs · Yuandong Tian, Xinlei Chen, Surya Ganguli · 12 Feb 2021 (SSL)
  19. Tight Bounds on the Smallest Eigenvalue of the Neural Tangent Kernel for Deep ReLU Networks · Quynh N. Nguyen, Marco Mondelli, Guido Montúfar · 21 Dec 2020
  20. On Random Matrices Arising in Deep Neural Networks: General I.I.D. Case · L. Pastur, V. Slavin · 20 Nov 2020 (CML)
  21. Tensor Programs III: Neural Matrix Laws · Greg Yang · 22 Sep 2020
  22. Deep Isometric Learning for Visual Recognition · Haozhi Qi, Chong You, Xinyu Wang, Yi Ma, Jitendra Malik · 30 Jun 2020 (VLM)
  23. Associative Memory in Iterated Overparameterized Sigmoid Autoencoders · Yibo Jiang, C. Pehlevan · 30 Jun 2020
  24. Neural Anisotropy Directions · Guillermo Ortiz-Jiménez, Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, P. Frossard · 17 Jun 2020
  25. The Spectrum of Fisher Information of Deep Networks Achieving Dynamical Isometry · Tomohiro Hayase, Ryo Karakida · 14 Jun 2020
  26. A Comprehensive and Modularized Statistical Framework for Gradient Norm Equality in Deep Neural Networks · Zhaodong Chen, Lei Deng, Bangyan Wang, Guoqi Li, Yuan Xie · 01 Jan 2020
  27. Mean field theory for deep dropout networks: digging up gradient backpropagation deeply · Wei Huang, R. Xu, Weitao Du, Yutian Zeng, Yunce Zhao · 19 Dec 2019
  28. Optimization for deep learning: theory and algorithms · Ruoyu Sun · 19 Dec 2019 (ODL)
  29. Optimal Machine Intelligence at the Edge of Chaos · Ling Feng, Lin Zhang, C. Lai · 11 Sep 2019
  30. The Normalization Method for Alleviating Pathological Sharpness in Wide Neural Networks · Ryo Karakida, S. Akaho, S. Amari · 07 Jun 2019
  31. Residual Networks as Nonlinear Systems: Stability Analysis using Linearization · Kai Rothauge, Z. Yao, Zixi Hu, Michael W. Mahoney · 31 May 2019
  32. Dynamical Isometry and a Mean Field Theory of CNNs: How to Train 10,000-Layer Vanilla Convolutional Neural Networks · Lechao Xiao, Yasaman Bahri, Jascha Narain Sohl-Dickstein, S. Schoenholz, Jeffrey Pennington · 14 Jun 2018
  33. Universal Statistics of Fisher Information in Deep Neural Networks: Mean Field Approach · Ryo Karakida, S. Akaho, S. Amari · 04 Jun 2018 (FedML)