Sample Variance Decay in Randomly Initialized ReLU Networks

13 February 2019
Kyle L. Luther
H. S. Seung
arXiv:1902.04942
Abstract

Before training a neural net, a classic rule of thumb is to randomly initialize the weights so that the variance of activations is preserved across layers. This rule is traditionally interpreted in terms of the total variance due to randomness in both weights and samples. Alternatively, it can be interpreted as preservation of the variance over samples for a fixed network. The two interpretations differ little for a shallow net, but the difference is shown to grow with depth for a deep ReLU net by decomposing the total variance into the network-averaged sum of the sample variance and the square of the sample mean. Through an analytical calculation in the limit of infinite network width and numerical simulations at finite width, we demonstrate that even when the total variance is preserved, the sample variance decays in the later layers. We show that Batch Normalization eliminates this decay and provide empirical evidence that preserving the sample variance, rather than only the total variance, at initialization can affect the training dynamics of a deep network.
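
The decomposition and the decay described in the abstract can be observed directly in simulation. The following NumPy sketch (not the authors' code; the width, depth, batch size, and the placement of the normalization step are illustrative assumptions) propagates a batch through a deep ReLU net with He initialization, tracks the network-averaged sample variance and squared sample mean, whose sum equals the network-averaged second moment, and optionally standardizes each layer's pre-activations over the batch to mimic Batch Normalization at initialization.

```python
import numpy as np

def run(depth=50, width=1000, batch=256, use_batchnorm=False, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((batch, width))       # a batch of input samples
    for layer in range(1, depth + 1):
        # He initialization: preserves the second moment of ReLU activations in expectation
        W = rng.standard_normal((width, width)) * np.sqrt(2.0 / width)
        z = x @ W.T                               # pre-activations (no biases)
        if use_batchnorm:
            z = (z - z.mean(axis=0)) / (z.std(axis=0) + 1e-8)  # standardize over the batch
        x = np.maximum(z, 0.0)                    # ReLU

        # Decomposition from the abstract (exact for each realized network):
        #   network-averaged second moment = sample variance + squared sample mean
        total = (x ** 2).mean()                   # single-network proxy for the total variance
        sample_var = x.var(axis=0).mean()         # sample variance, averaged over units
        sq_mean = (x.mean(axis=0) ** 2).mean()    # squared sample mean, averaged over units
        if layer % 10 == 0:
            print(f"layer {layer:3d}: total~{total:.3f}  "
                  f"sample_var={sample_var:.3f}  sq_mean={sq_mean:.3f}")

print("Plain ReLU net (sample variance decays with depth):")
run(use_batchnorm=False)
print("\nWith per-batch normalization (decay eliminated):")
run(use_batchnorm=True)
```

In line with the abstract, without normalization the printed sample variance should shrink with depth while the squared sample mean grows, even though their sum stays roughly constant; with the normalization step enabled, the sample variance stays roughly constant as well.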
