Risk Bounds for High-dimensional Ridge Function Combinations Including Neural Networks

Abstract

Let $f^{\star}$ be a function on $\mathbb{R}^d$ satisfying a spectral norm assumption with constant $v_{f^{\star}}$. For various noise settings, we show that $\mathbb{E}\|\hat{f} - f^{\star}\|^2 \leq \left(v_{f^{\star}}^4 \frac{\log d}{n}\right)^{1/3}$, where $n$ is the sample size and $\hat{f}$ is either a penalized least squares estimator or a greedily obtained version of such, using linear combinations of sinusoidal, sigmoidal, ramp, ramp-squared, or other smooth ridge functions. The candidate fits may be chosen from a continuum of functions, thus avoiding the rigidity of discretizations of the parameter space. If, on the other hand, the candidate fits are chosen from a discretization, we show that $\mathbb{E}\|\hat{f} - f^{\star}\|^2 \leq \left(v_{f^{\star}}^3 \frac{\log d}{n}\right)^{2/5}$. This work bridges non-linear and non-parametric function estimation and includes single-hidden-layer nets. Unlike past theory for such settings, our bounds show that the risk is small even when the input dimension $d$ of an infinite-dimensional parameterized dictionary is much larger than the available sample size. When the dimension exceeds the cube root of the sample size, this rate improves on the more familiar risk bound $v_{f^{\star}}\left(\frac{d \log(n/d)}{n}\right)^{1/2}$, which we also investigate here.
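As a rough numerical illustration (not from the paper), the sketch below evaluates the three rates stated above with all suppressed constants set to one and a hypothetical spectral norm $v_{f^{\star}} = 1$. It shows how the $(\log d / n)^{1/3}$ rate compares with the dimension-dependent $(d\log(n/d)/n)^{1/2}$ rate once $d$ grows past $n^{1/3}$.

```python
import numpy as np

# Hypothetical settings: spectral norm v = 1 and all constants in the
# bounds suppressed (set to 1), so the comparison is illustrative only.
v = 1.0
n = 10_000  # sample size

def bound_continuum(v, d, n):
    """(v^4 log d / n)^(1/3): penalized least squares over a continuum of fits."""
    return (v**4 * np.log(d) / n) ** (1 / 3)

def bound_discretized(v, d, n):
    """(v^3 log d / n)^(2/5): candidate fits from a discretized parameter set."""
    return (v**3 * np.log(d) / n) ** (2 / 5)

def bound_classical(v, d, n):
    """v (d log(n/d) / n)^(1/2): the familiar dimension-dependent rate (needs d < n)."""
    return v * (d * np.log(n / d) / n) ** 0.5

print(f"n^(1/3) = {n ** (1/3):.1f}")
for d in (10, 100, 1_000, 5_000):
    print(f"d={d:>5}: continuum={bound_continuum(v, d, n):.3f}  "
          f"discretized={bound_discretized(v, d, n):.3f}  "
          f"classical={bound_classical(v, d, n):.3f}")
```

With these illustrative constants, the classical rate degrades as $d$ grows while the $\log d$ dependence of the other two bounds keeps them nearly flat, which is the high-dimensional regime the abstract emphasizes.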
