ResearchTrend.AI
Intrinsic dimensionality and generalization properties of the $\mathcal{R}$-norm inductive bias

10 June 2022
Navid Ardeshir
Daniel J. Hsu
Clayton Sanford
Abstract

We study the structural and statistical properties of $\mathcal{R}$-norm minimizing interpolants of datasets labeled by specific target functions. The $\mathcal{R}$-norm is the basis of an inductive bias for two-layer neural networks, recently introduced to capture the functional effect of controlling the size of network weights, independently of the network width. We find that these interpolants are intrinsically multivariate functions, even when there are ridge functions that fit the data, and also that the $\mathcal{R}$-norm inductive bias is not sufficient for achieving statistically optimal generalization for certain learning problems. Altogether, these results shed new light on an inductive bias that is connected to practical neural network training.
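For readers unfamiliar with the objects named in the abstract, the following sketch records the standard formulation of the $\mathcal{R}$-norm for two-layer ReLU networks and of a ridge function; the exact notation is an assumption based on the common definition in this line of work, not taken from the abstract itself:

```latex
% R-norm (representational cost) of a function f: the least total weight
% over all two-layer ReLU representations of f, independent of width:
%   R(f) = inf { sum_i |a_i| * ||w_i||_2 :
%                f(x) = sum_i a_i * relu(<w_i, x> + b_i) + <c, x> + d }
\[
\mathcal{R}(f) \;=\; \inf\Big\{ \sum_{i} |a_i|\,\lVert w_i\rVert_2
  \;:\; f(x) = \sum_{i} a_i\,\sigma(\langle w_i, x\rangle + b_i)
  + \langle c, x\rangle + d \Big\},
\qquad \sigma(t) = \max\{t, 0\}.
\]
% A ridge function depends on a single direction w:
\[
f(x) = g(\langle w, x\rangle) \quad \text{for some } w \in \mathbb{R}^d,\; g : \mathbb{R} \to \mathbb{R}.
\]
```

With these definitions, the abstract's first claim says that among all interpolants of the data, those minimizing $\mathcal{R}$ need not be ridge functions even when a ridge function interpolant exists.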
