Efficient Subsampled Gauss-Newton and Natural Gradient Methods for Training Neural Networks
arXiv: 1906.02353
5 June 2019
Yi Ren, D. Goldfarb

Papers citing "Efficient Subsampled Gauss-Newton and Natural Gradient Methods for Training Neural Networks" (9 papers shown)
  • Position: Curvature Matrices Should Be Democratized via Linear Operators
    Felix Dangel, Runa Eschenhagen, Weronika Ormaniec, Andres Fernandez, Lukas Tatzel, Agustinus Kristiadi
    31 Jan 2025
  • Theoretical characterisation of the Gauss-Newton conditioning in Neural Networks
    Jim Zhao, Sidak Pal Singh, Aurelien Lucchi
    04 Nov 2024
  • An Improved Empirical Fisher Approximation for Natural Gradient Descent
    Xiaodong Wu, Wenyi Yu, Chao Zhang, Philip Woodland
    10 Jun 2024
  • ASDL: A Unified Interface for Gradient Preconditioning in PyTorch
    Kazuki Osawa, Satoki Ishikawa, Rio Yokota, Shigang Li, Torsten Hoefler
    08 May 2023
  • Achieving High Accuracy with PINNs via Energy Natural Gradients
    Johannes Müller, Marius Zeinhofer
    25 Feb 2023
  • Improving Levenberg-Marquardt Algorithm for Neural Networks
    Omead Brandon Pooladzandi, Yiming Zhou
    17 Dec 2022
  • Gradient Descent on Neurons and its Link to Approximate Second-Order Optimization
    Frederik Benzing
    28 Jan 2022
  • TENGraD: Time-Efficient Natural Gradient Descent with Exact Fisher-Block Inversion
    Saeed Soori, Bugra Can, Baourun Mu, Mert Gurbuzbalaban, M. Dehnavi
    07 Jun 2021
  • Gram-Gauss-Newton Method: Learning Overparameterized Neural Networks for Regression Problems
    Tianle Cai, Ruiqi Gao, Jikai Hou, Siyu Chen, Dong Wang, Di He, Zhihua Zhang, Liwei Wang
    28 May 2019