ResearchTrend.AI

arXiv:1812.05994
Products of Many Large Random Matrices and Gradients in Deep Neural Networks

Boris Hanin, Mihai Nica
14 December 2018

Papers citing "Products of Many Large Random Matrices and Gradients in Deep Neural Networks" (5 of 5 shown):
1. Deep Neural Nets as Hamiltonians
   Mike Winer, Boris Hanin
   31 Mar 2025

2. A Convergence Theory for Deep Learning via Over-Parameterization
   Zeyuan Allen-Zhu, Yuanzhi Li, Zhao Song (AI4CE, ODL)
   09 Nov 2018

3. The Emergence of Spectral Universality in Deep Networks
   Jeffrey Pennington, S. Schoenholz, Surya Ganguli
   27 Feb 2018

4. Which Neural Net Architectures Give Rise To Exploding and Vanishing Gradients?
   Boris Hanin
   11 Jan 2018

5. Resurrecting the sigmoid in deep learning through dynamical isometry: theory and practice
   Jeffrey Pennington, S. Schoenholz, Surya Ganguli (ODL)
   13 Nov 2017