The Feature Speed Formula: a flexible approach to scale hyper-parameters of deep neural networks

30 November 2023
Lénaic Chizat
Praneeth Netrapalli
arXiv: 2311.18718
Abstract

Deep learning succeeds by doing hierarchical feature learning, yet tuning hyper-parameters (HPs) such as initialization scales, learning rates, etc., gives only indirect control over this behavior. In this paper, we introduce a key notion to predict and control feature learning: the angle $\theta_\ell$ between the feature updates and the backward pass (at layer index $\ell$). We show that the magnitude of feature updates after one GD step, at any training time, can be expressed via a simple and general \emph{feature speed formula} in terms of this angle $\theta_\ell$, the loss decay, and the magnitude of the backward pass. This angle $\theta_\ell$ is controlled by the conditioning of the layer-to-layer Jacobians; at random initialization, it is determined by the spectrum of a certain kernel, which coincides with the Neural Tangent Kernel when $\ell = \text{depth}$. Given $\theta_\ell$, the feature speed formula provides rules to adjust HPs (scales and learning rates) so as to satisfy certain dynamical properties, such as feature learning and loss decay. We investigate the implications of our approach for ReLU MLPs and ResNets in the large width-then-depth limit. Relying on prior work, we show that in ReLU MLPs with iid initialization, the angle degenerates with depth as $\cos(\theta_\ell) = \Theta(1/\sqrt{\ell})$. In contrast, ResNets with branch scale $O(1/\sqrt{\text{depth}})$ maintain a non-degenerate angle $\cos(\theta_\ell) = \Theta(1)$. We use these insights to recover key properties of known HP scalings and also to introduce a new HP scaling for large-depth ReLU MLPs with favorable theoretical properties.
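The angle $\theta_\ell$ in the abstract is defined operationally: take one gradient-descent step, observe how the features of layer $\ell$ move, and compare that motion with the backward pass $\partial L / \partial h_\ell$. The sketch below is a minimal way to measure this quantity numerically; it is not the authors' code. The network size, He-style iid initialization, squared loss, learning rate, the choice of pre-activations as the "features", and the sign convention (comparing against the negative backward pass) are all illustrative assumptions.

```python
# Minimal sketch: measure cos(theta_ell) between the one-step feature update
# Delta h_ell and the (negative) backward pass dL/dh_ell in a small ReLU MLP,
# for a single input. All sizes and scales below are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
depth, width, lr = 6, 256, 1e-2
x = rng.standard_normal(width) / np.sqrt(width)
y = 1.0
# He-style iid initialization for the hidden layers (an assumed choice).
Ws = [rng.standard_normal((width, width)) * np.sqrt(2.0 / width) for _ in range(depth)]
w_out = rng.standard_normal(width) / np.sqrt(width)

def forward(Ws, w_out, x):
    """Return the pre-activation features h_ell of every layer and the prediction."""
    hs, h = [], x
    for W in Ws:
        z = W @ h
        hs.append(z)
        h = np.maximum(z, 0.0)              # ReLU
    return hs, w_out @ h

def backward(Ws, w_out, x, y):
    """Return dL/dW_ell and dL/dh_ell for the squared loss L = 0.5*(f - y)^2."""
    hs, f = forward(Ws, w_out, x)
    acts = [x] + [np.maximum(h, 0.0) for h in hs]
    g = (f - y) * w_out                     # gradient w.r.t. the last post-activation
    grads_W, grads_h = [], []
    for ell in reversed(range(len(Ws))):
        g_pre = g * (hs[ell] > 0)           # back through ReLU: dL/dh_ell (pre-activation)
        grads_h.append(g_pre)
        grads_W.append(np.outer(g_pre, acts[ell]))
        g = Ws[ell].T @ g_pre               # propagate to the previous layer
    return grads_W[::-1], grads_h[::-1]

# One gradient-descent step on the hidden weights, then measure, per layer,
# cos(theta_ell) between the feature update and the negative backward pass.
hs_before, _ = forward(Ws, w_out, x)
grads_W, grads_h = backward(Ws, w_out, x, y)
Ws_new = [W - lr * gW for W, gW in zip(Ws, grads_W)]
hs_after, _ = forward(Ws_new, w_out, x)

for ell, (h0, h1, g) in enumerate(zip(hs_before, hs_after, grads_h), start=1):
    dh = h1 - h0
    cos = float(dh @ (-g) / (np.linalg.norm(dh) * np.linalg.norm(g) + 1e-12))
    print(f"layer {ell}: |Delta h| = {np.linalg.norm(dh):.3e}, cos(theta) = {cos:.3f}")
```

The sketch only shows how the angle can be measured at a single point; under the paper's analysis, one would expect this cosine to shrink with depth for a plain iid-initialized ReLU MLP, and to stay of order one for ResNets with $O(1/\sqrt{\text{depth}})$ branch scale.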
