
arXiv:1607.01434

Risk Bounds for High-dimensional Ridge Function Combinations Including Neural Networks

5 July 2016
Jason M. Klusowski
Andrew R. Barron
Abstract

Let $f^{\star}$ be a function on $\mathbb{R}^d$ satisfying a spectral norm condition. For various noise settings, we show that $\mathbb{E}\|\hat{f} - f^{\star}\|^2 \leq v_{f^{\star}}\left(\frac{\log d}{n}\right)^{1/4}$, where $n$ is the sample size and $\hat{f}$ is either a penalized least squares estimator or a greedily obtained version of such, using linear combinations of ramp, sinusoidal, sigmoidal, or other bounded Lipschitz ridge functions. Our risk bound is effective even when the dimension $d$ is much larger than the available sample size. For settings where the dimension is larger than the square root of the sample size, this quantity is seen to improve the more familiar risk bound $v_{f^{\star}}\left(\frac{d\log(n/d)}{n}\right)^{1/2}$, which is also investigated here.
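To see the claimed crossover concretely: squaring both sides and rearranging, the new rate is smaller than the familiar one exactly when $\sqrt{n \log d} \leq d \log(n/d)$, i.e., up to logarithmic factors, when $d \gtrsim \sqrt{n}$. A worked numerical instance (the values $n = 10^6$ and $d = 10^4$ are illustrative choices made here, not taken from the paper):

% Sanity check of the two rates at n = 10^6, d = 10^4, where d > sqrt(n) = 10^3
% (natural logarithms; values rounded to three decimals).
\[
\left(\frac{\log d}{n}\right)^{1/4}
  = \left(\frac{\log 10^{4}}{10^{6}}\right)^{1/4} \approx 0.055,
\qquad
\left(\frac{d \log(n/d)}{n}\right)^{1/2}
  = \left(\frac{10^{4}\,\log 10^{2}}{10^{6}}\right)^{1/2} \approx 0.215.
\]

Here the $(\log d / n)^{1/4}$ bound is roughly four times smaller, consistent with $d = 10^4$ exceeding $\sqrt{n} = 10^3$.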
