Scaling Laws in Linear Regression: Compute, Parameters, and Data

12 June 2024
Licong Lin, Jingfeng Wu, Sham Kakade, Peter L. Bartlett, Jason D. Lee
arXiv:2406.08466
Abstract

Empirically, large-scale deep learning models often satisfy a neural scaling law: the test error of the trained model improves polynomially as the model size and data size grow. However, conventional wisdom suggests the test error consists of approximation, bias, and variance errors, where the variance error increases with model size. This disagrees with the general form of neural scaling laws, which predict that increasing model size monotonically improves performance. We study the theory of scaling laws in an infinite dimensional linear regression setup. Specifically, we consider a model with $M$ parameters as a linear function of sketched covariates. The model is trained by one-pass stochastic gradient descent (SGD) using $N$ data. Assuming the optimal parameter satisfies a Gaussian prior and the data covariance matrix has a power-law spectrum of degree $a>1$, we show that the reducible part of the test error is $\Theta(M^{-(a-1)} + N^{-(a-1)/a})$. The variance error, which increases with $M$, is dominated by the other errors due to the implicit regularization of SGD, thus disappearing from the bound. Our theory is consistent with the empirical neural scaling laws and verified by numerical simulation.
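The setup described in the abstract can be probed with a small simulation. The following is a minimal sketch, not the authors' code: it truncates the infinite-dimensional problem to d coordinates, uses a Gaussian sketch, noiseless labels, and a constant SGD step size, all of which are illustrative assumptions (the paper's exact sketching scheme and step-size schedule may differ). It trains a model with M parameters by one-pass SGD on N samples and reports the test error, which the theory predicts scales as $\Theta(M^{-(a-1)} + N^{-(a-1)/a})$.

```python
# Illustrative sketch only (not the authors' code): one-pass SGD on a sketched
# linear model with a power-law covariance spectrum. The ambient dimension is
# truncated to d, labels are noiseless, the sketch is Gaussian, and the step
# size is constant -- simplifying assumptions made here for brevity.
import numpy as np


def excess_risk(M, N, d=1000, a=2.0, lr=0.5, n_test=4000, seed=0):
    """Test error of a sketched linear model trained with one-pass SGD."""
    rng = np.random.default_rng(seed)
    lam = np.arange(1, d + 1, dtype=float) ** (-a)   # power-law spectrum, degree a > 1
    sqrt_lam = np.sqrt(lam)
    w_star = rng.standard_normal(d)                  # optimal parameter, Gaussian prior
    S = rng.standard_normal((M, d)) / np.sqrt(d)     # M x d random sketch
    v = np.zeros(M)                                  # parameters of the sketched model

    for _ in range(N):                               # one pass: each sample used once
        x = rng.standard_normal(d) * sqrt_lam        # x ~ N(0, diag(lam))
        y = x @ w_star                               # noiseless label for simplicity
        xs = S @ x                                   # sketched covariates
        v -= lr * (v @ xs - y) * xs                  # SGD step on the squared loss

    # The learned predictor is x -> (S^T v) @ x; evaluate it on fresh samples.
    w_hat = S.T @ v
    X_test = rng.standard_normal((n_test, d)) * sqrt_lam
    return np.mean((X_test @ (w_hat - w_star)) ** 2)


if __name__ == "__main__":
    # Rough check of the M^{-(a-1)} trend at fixed N (here a = 2, so roughly 1/M).
    for M in (25, 50, 100, 200):
        print(M, excess_risk(M, N=10_000))
```

Sweeping M at a fixed large N (and vice versa) should show the corresponding power-law decay until the other term dominates; the constants above are chosen only so the script runs quickly, not to match the paper's experiments.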
