Stabilize Deep ResNet with A Sharp Scaling Factor τ

17 March 2019
Huishuai Zhang
Da Yu
Mingyang Yi
Wei Chen
Tie-Yan Liu
Abstract

We study the stability and convergence of training deep ResNets with gradient descent. Specifically, we show that the parametric branch in the residual block should be scaled down by a factor $\tau = O(1/\sqrt{L})$ to guarantee a stable forward/backward process, where $L$ is the number of residual blocks. Moreover, we establish a converse result: the forward process is unbounded when $\tau > L^{-\frac{1}{2}+c}$ for any positive constant $c$. Together, these two results establish a sharp value of the scaling factor that determines the stability of deep ResNets. Based on the stability result, we further show that gradient descent finds the global minima if the ResNet is properly over-parameterized, which significantly improves over previous work by admitting a much larger range of $\tau$ for global convergence. Moreover, we show that the convergence rate is independent of the depth, theoretically justifying the advantage of ResNets over vanilla feedforward networks. Empirically, with such a factor $\tau$, one can train a deep ResNet without normalization layers. Moreover, for ResNets with normalization layers, adding such a factor $\tau$ also stabilizes training and yields significant performance gains for deep ResNets.
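
The scaling rule from the abstract is simple to apply in practice: each residual block adds its parametric branch scaled by τ = 1/√L, where L is the total number of blocks. Below is a minimal PyTorch sketch of such a network; the class names (ScaledResidualBlock, ScaledResNet), the fully connected two-layer branch, and the default widths are illustrative assumptions, not details taken from the paper.

```python
import math
import torch
import torch.nn as nn


class ScaledResidualBlock(nn.Module):
    """One residual block whose parametric branch is scaled by tau = 1/sqrt(L).

    This follows the stability condition tau = O(1/sqrt(L)) described in the
    abstract, where L is the total number of residual blocks (hypothetical
    implementation, not the authors' code).
    """

    def __init__(self, width: int, num_blocks: int):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Linear(width, width),
            nn.ReLU(),
            nn.Linear(width, width),
        )
        # Sharp scaling factor applied to the parametric branch.
        self.tau = 1.0 / math.sqrt(num_blocks)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Identity path plus the down-scaled parametric branch.
        return x + self.tau * self.branch(x)


class ScaledResNet(nn.Module):
    """A plain fully connected ResNet without normalization layers."""

    def __init__(self, width: int = 128, num_blocks: int = 100, num_classes: int = 10):
        super().__init__()
        self.blocks = nn.Sequential(
            *[ScaledResidualBlock(width, num_blocks) for _ in range(num_blocks)]
        )
        self.head = nn.Linear(width, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.blocks(x))


if __name__ == "__main__":
    # Usage sketch: a 100-block ResNet whose forward pass stays bounded
    # thanks to the 1/sqrt(L) branch scaling.
    model = ScaledResNet(width=128, num_blocks=100)
    y = model(torch.randn(4, 128))
    print(y.shape)  # torch.Size([4, 10])
```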
