arXiv:2010.00879
Understanding Approximate Fisher Information for Fast Convergence of Natural Gradient Descent in Wide Neural Networks
2 October 2020
Ryo Karakida
Kazuki Osawa
Papers citing "Understanding Approximate Fisher Information for Fast Convergence of Natural Gradient Descent in Wide Neural Networks" (7 papers):
- Modify Training Directions in Function Space to Reduce Generalization Error. Yi Yu, Wenlian Lu, Boyu Chen (25 Jul 2023)
- The Geometry of Neural Nets' Parameter Spaces Under Reparametrization. Agustinus Kristiadi, Felix Dangel, Philipp Hennig (14 Feb 2023)
- Component-Wise Natural Gradient Descent -- An Efficient Neural Network Optimization. Tran van Sang, Mhd Irvan, R. Yamaguchi, Toshiyuki Nakata (11 Oct 2022)
- SCORE: Approximating Curvature Information under Self-Concordant Regularization. Adeyemi Damilare Adeoye, Alberto Bemporad (14 Dec 2021)
- Nys-Newton: Nyström-Approximated Curvature for Stochastic Optimization. Dinesh Singh, Hardik Tankaria, M. Yamada (16 Oct 2021)
- TENGraD: Time-Efficient Natural Gradient Descent with Exact Fisher-Block Inversion. Saeed Soori, Bugra Can, Baourun Mu, Mert Gurbuzbalaban, M. Dehnavi (07 Jun 2021)
- Dynamical Isometry and a Mean Field Theory of CNNs: How to Train 10,000-Layer Vanilla Convolutional Neural Networks. Lechao Xiao, Yasaman Bahri, Jascha Narain Sohl-Dickstein, S. Schoenholz, Jeffrey Pennington (14 Jun 2018)