A Neural Scaling Law from Lottery Ticket Ensembling

3 October 2023
Ziming Liu
Max Tegmark
arXiv:2310.02258
Abstract

Neural scaling laws (NSL) refer to the phenomenon where model performance improves with scale. Sharma & Kaplan analyzed NSL using approximation theory and predicted that MSE losses decay as $N^{-\alpha}$, $\alpha=4/d$, where $N$ is the number of model parameters and $d$ is the intrinsic input dimension. Although their theory works well for some cases (e.g., ReLU networks), we surprisingly find that a simple 1D problem $y=x^2$ manifests a different scaling law ($\alpha=1$) from their prediction ($\alpha=4$). We opened the neural networks and found that the new scaling law originates from lottery ticket ensembling: a wider network on average has more "lottery tickets", which are ensembled to reduce the variance of outputs. We support the ensembling mechanism by mechanistically interpreting single neural networks as well as by studying them statistically. We attribute the $N^{-1}$ scaling law to the "central limit theorem" of lottery tickets. Finally, we discuss its potential implications for large language models and statistical physics-type theories of learning.
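As a rough illustration of the kind of measurement the abstract describes, the sketch below trains small fully-connected networks of increasing width on the 1D task $y=x^2$ and fits the exponent $\alpha$ from the slope of log MSE loss versus log parameter count $N$. This is a minimal sketch, not the authors' code; the architecture, widths, optimizer, and training budget are all illustrative assumptions.

```python
# Minimal sketch (not the authors' code): estimate the scaling exponent alpha
# for MSE loss vs. parameter count N on the 1D task y = x^2.
# Widths, learning rate, and epoch count below are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.linspace(-1, 1, 256).unsqueeze(1)  # 1D inputs on [-1, 1]
y = x ** 2                                   # target function y = x^2

def train_mlp(width, epochs=2000, lr=1e-2):
    """Train a one-hidden-layer MLP; return (parameter count, final MSE loss)."""
    model = nn.Sequential(nn.Linear(1, width), nn.Tanh(), nn.Linear(width, 1))
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()
    n_params = sum(p.numel() for p in model.parameters())
    final_loss = nn.functional.mse_loss(model(x), y).item()
    return n_params, final_loss

# Sweep width, then fit alpha as the (negative) slope of log-loss vs. log-N.
results = [train_mlp(w) for w in (8, 16, 32, 64, 128)]
logN = np.log([n for n, _ in results])
logL = np.log([l for _, l in results])
alpha = -np.polyfit(logN, logL, 1)[0]
print(f"fitted scaling exponent alpha ~ {alpha:.2f}")
```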
