Efficient Subsampled Gauss-Newton and Natural Gradient Methods for Training Neural Networks
Yi Ren, D. Goldfarb
arXiv:1906.02353, 5 June 2019

Papers citing "Efficient Subsampled Gauss-Newton and Natural Gradient Methods for Training Neural Networks" (9 papers):

Position: Curvature Matrices Should Be Democratized via Linear Operators
Felix Dangel, Runa Eschenhagen, Weronika Ormaniec, Andres Fernandez, Lukas Tatzel, Agustinus Kristiadi
31 Jan 2025

Theoretical characterisation of the Gauss-Newton conditioning in Neural Networks
Jim Zhao, Sidak Pal Singh, Aurelien Lucchi
04 Nov 2024

An Improved Empirical Fisher Approximation for Natural Gradient Descent
Xiaodong Wu, Wenyi Yu, Chao Zhang, Philip Woodland
10 Jun 2024

ASDL: A Unified Interface for Gradient Preconditioning in PyTorch
Kazuki Osawa, Satoki Ishikawa, Rio Yokota, Shigang Li, Torsten Hoefler
08 May 2023

Achieving High Accuracy with PINNs via Energy Natural Gradients
Johannes Müller, Marius Zeinhofer
25 Feb 2023

Improving Levenberg-Marquardt Algorithm for Neural Networks
Omead Brandon Pooladzandi, Yiming Zhou
17 Dec 2022

Gradient Descent on Neurons and its Link to Approximate Second-Order Optimization
Frederik Benzing
28 Jan 2022

TENGraD: Time-Efficient Natural Gradient Descent with Exact Fisher-Block Inversion
Saeed Soori, Bugra Can, Baourun Mu, Mert Gurbuzbalaban, M. Dehnavi
07 Jun 2021

Gram-Gauss-Newton Method: Learning Overparameterized Neural Networks for Regression Problems
Tianle Cai, Ruiqi Gao, Jikai Hou, Siyu Chen, Dong Wang, Di He, Zhihua Zhang, Liwei Wang
28 May 2019