A Stable Whitening Optimizer for Efficient Neural Network Training

8 June 2025
Kevin Frans
Sergey Levine
Pieter Abbeel
arXiv:2506.07254
Main: 9 pages · Appendix: 5 pages · Bibliography: 4 pages · 8 figures · 2 tables
Abstract

In this work, we take an experimentally grounded look at neural network optimization. Building on the Shampoo family of algorithms, we identify and alleviate three key issues, resulting in the proposed SPlus method. First, we find that naive Shampoo is prone to divergence when matrix-inverses are cached for long periods. We introduce an alternate bounded update combining a historical eigenbasis with instantaneous normalization, resulting in across-the-board stability and significantly lower computational requirements. Second, we adapt a shape-aware scaling to enable learning rate transfer across network width. Third, we find that high learning rates result in large parameter noise, and propose a simple iterate-averaging scheme which unblocks faster learning. To properly confirm these findings, we introduce a pointed Transformer training benchmark, considering three objectives (language modelling, image classification, and diffusion modelling) across different stages of training. On average, SPlus is able to reach the validation performance of Adam within 44% of the gradient steps and 62% of the wallclock time.
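The iterate-averaging idea mentioned in the abstract can be illustrated with a short sketch. This is a minimal illustration assuming a simple exponential moving average (EMA) of the weights that is used for evaluation in place of the raw iterates; the paper's exact averaging scheme and hyperparameters are not specified here, and the names `train_with_averaging`, `grad_fn`, and `opt_step` are hypothetical.

```python
import numpy as np

def train_with_averaging(params, grad_fn, opt_step, steps=1000, decay=0.999):
    """Run the fast optimizer, but keep a slow EMA copy of the weights."""
    avg_params = params.copy()
    for _ in range(steps):
        grads = grad_fn(params)            # gradient at the current (noisy) iterate
        params = opt_step(params, grads)   # high-learning-rate optimizer step
        # The averaged weights smooth out the parameter noise from large steps.
        avg_params = decay * avg_params + (1.0 - decay) * params
    return params, avg_params              # evaluate with avg_params

# Toy usage: noisy quadratic loss, plain SGD as the inner optimizer.
rng = np.random.default_rng(0)
grad_fn = lambda p: 2.0 * p + 0.5 * rng.normal(size=p.shape)  # noisy gradient of ||p||^2
opt_step = lambda p, g: p - 0.1 * g                           # SGD with a large step size
final, averaged = train_with_averaging(np.ones(4), grad_fn, opt_step)
```

The averaged weights typically sit closer to the optimum than the last raw iterate when the learning rate is large, which is the effect the abstract attributes to its iterate-averaging scheme.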

@article{frans2025_2506.07254,
  title={A Stable Whitening Optimizer for Efficient Neural Network Training},
  author={Kevin Frans and Sergey Levine and Pieter Abbeel},
  journal={arXiv preprint arXiv:2506.07254},
  year={2025}
}