Gauss-Newton Natural Gradient Descent for Shape Learning
James King
Arturs Berzins
Siddhartha Mishra
Marius Zeinhofer
Main: 11 pages
Bibliography: 2 pages
Appendix: 3 pages
9 figures
1 table
Abstract
We explore the use of the Gauss-Newton method for optimization in shape learning, including implicit neural surfaces and geometry-informed neural networks. The method addresses key challenges in shape learning, such as the ill-conditioning of the underlying differential constraints and the mismatch between the optimization problem in parameter space and the function space where the problem is naturally posed. This yields significantly faster and more stable convergence than standard first-order methods, requiring far fewer iterations. Experiments across benchmark shape optimization tasks demonstrate that the Gauss-Newton method consistently improves both training speed and final solution accuracy.
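The paper applies Gauss-Newton optimization to shape-learning problems; the details of its function-space formulation are not given in this abstract. As a minimal sketch of the generic damped Gauss-Newton update the method builds on (all function names and the exponential-fit example here are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def gauss_newton(r, J, theta, iters=20, damping=1e-8):
    """Damped Gauss-Newton for min_theta 0.5 * ||r(theta)||^2:
    theta <- theta - (J^T J + damping * I)^{-1} J^T r."""
    for _ in range(iters):
        res = r(theta)   # residual vector at current parameters
        Jac = J(theta)   # Jacobian of residuals w.r.t. parameters
        g = Jac.T @ res  # gradient of the least-squares objective
        # Gauss-Newton Hessian approximation, regularized for stability
        H = Jac.T @ Jac + damping * np.eye(theta.size)
        theta = theta - np.linalg.solve(H, g)
    return theta

# Toy example: fit y = a * exp(b * x) to synthetic data.
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * np.exp(-1.5 * x)

def r(theta):
    a, b = theta
    return a * np.exp(b * x) - y

def J(theta):
    a, b = theta
    e = np.exp(b * x)
    return np.stack([e, a * x * e], axis=1)

theta = gauss_newton(r, J, np.array([1.0, 0.0]))
```

Because `J.T @ J` approximates the Hessian using only first derivatives, each step accounts for the local curvature of the residual map, which is what gives Gauss-Newton its robustness to the ill-conditioning that slows plain gradient descent.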
