$L^p$ sampling numbers for the Fourier-analytic Barron space

Abstract

In this paper, we consider Barron functions $f : [0,1]^d \to \mathbb{R}$ of smoothness $\sigma > 0$, which are functions that can be written as \[ f(x) = \int_{\mathbb{R}^d} F(\xi) \, e^{2 \pi i \langle x, \xi \rangle} \, d \xi \quad \text{with} \quad \int_{\mathbb{R}^d} |F(\xi)| \cdot (1 + |\xi|)^{\sigma} \, d \xi < \infty. \] For $\sigma = 1$, these functions play a prominent role in machine learning, since they can be efficiently approximated by (shallow) neural networks without suffering from the curse of dimensionality. For these functions, we study the following question: Given $m$ point samples $f(x_1), \dots, f(x_m)$ of an unknown Barron function $f : [0,1]^d \to \mathbb{R}$ of smoothness $\sigma$, how well can $f$ be recovered from these samples, for an optimal choice of the sampling points and the reconstruction procedure? Denoting the optimal reconstruction error measured in $L^p$ by $s_m(\sigma; L^p)$, we show that \[ m^{- \frac{1}{\max \{ p,2 \}} - \frac{\sigma}{d}} \lesssim s_m(\sigma; L^p) \lesssim (\ln (e + m))^{\alpha(\sigma,d) / p} \cdot m^{- \frac{1}{\max \{ p,2 \}} - \frac{\sigma}{d}}, \] where the implied constants depend only on $\sigma$ and $d$, and where $\alpha(\sigma,d)$ stays bounded as $d \to \infty$.
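To get a feel for the stated decay rate, the following minimal Python sketch (not part of the paper; the function name and sample parameter values are illustrative) evaluates the polynomial factor $m^{-1/\max\{p,2\} - \sigma/d}$ that governs both the lower and upper bounds, ignoring the logarithmic factor:

```python
def sampling_rate(m, p, sigma, d):
    """Polynomial decay rate m^(-1/max(p,2) - sigma/d) appearing in the
    two-sided bound on the L^p sampling numbers s_m(sigma; L^p)."""
    return m ** (-(1.0 / max(p, 2.0) + sigma / d))

# Illustrative values: sigma = 1 (the classical Barron class), d = 10,
# error measured in L^2, for increasing sample budgets m.
for m in [10**2, 10**4, 10**6]:
    print(m, sampling_rate(m, p=2.0, sigma=1.0, d=10))
```

Note that for fixed $p \le 2$ the exponent is $-1/2 - \sigma/d$, so the dimension $d$ only enters through the additive term $\sigma/d$, consistent with the abstract's emphasis that these functions avoid the curse of dimensionality.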
