Optimal approximation of piecewise smooth functions using deep ReLU neural networks

15 September 2017 · arXiv:1709.05289
P. Petersen
Felix Voigtländer
Abstract

We study the necessary and sufficient complexity of ReLU neural networks---in terms of depth and number of weights---which is required for approximating classifier functions in $L^2$. As a model class, we consider the set $\mathcal{E}^\beta(\mathbb{R}^d)$ of possibly discontinuous piecewise $C^\beta$ functions $f : [-1/2, 1/2]^d \to \mathbb{R}$, where the different smooth regions of $f$ are separated by $C^\beta$ hypersurfaces. For dimension $d \geq 2$, regularity $\beta > 0$, and accuracy $\varepsilon > 0$, we construct artificial neural networks with ReLU activation function that approximate functions from $\mathcal{E}^\beta(\mathbb{R}^d)$ up to $L^2$ error of $\varepsilon$. The constructed networks have a fixed number of layers, depending only on $d$ and $\beta$, and they have $O(\varepsilon^{-2(d-1)/\beta})$ many nonzero weights, which we prove to be optimal. In addition to the optimality in terms of the number of weights, we show that in order to achieve the optimal approximation rate, one needs ReLU networks of a certain depth. Precisely, for piecewise $C^\beta(\mathbb{R}^d)$ functions, this minimal depth is given---up to a multiplicative constant---by $\beta/d$. Up to a log factor, our constructed networks match this bound. This partly explains the benefits of depth for ReLU networks by showing that deep networks are necessary to achieve efficient approximation of (piecewise) smooth functions. Finally, we analyze approximation in high-dimensional spaces where the function $f$ to be approximated can be factorized into a smooth dimension-reducing feature map $\tau$ and a classifier function $g$---defined on a low-dimensional feature space---as $f = g \circ \tau$. We show that in this case the approximation rate depends only on the dimension of the feature space and not the input dimension.
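The paper constructs its fixed-depth ReLU networks analytically and does not rely on training. As a purely illustrative sketch of the setting (not the paper's construction), the snippet below fits a small fixed-depth ReLU network to a piecewise smooth target on $[-1/2, 1/2]^2$ and estimates its $L^2$ error by Monte Carlo; the width, depth, learning rate, step count, and sample sizes are hypothetical choices made only for this example.

```python
# Illustrative sketch only: train a small fixed-depth ReLU network on a
# piecewise smooth target f on [-1/2, 1/2]^2 and estimate the L^2 error.
# All hyperparameters below are arbitrary choices, not values from the paper.
import torch
import torch.nn as nn

torch.manual_seed(0)
d = 2  # input dimension

def target(x):
    # Piecewise smooth model function: a smooth profile inside the disk of
    # radius 1/4, zero outside, so f is discontinuous across a C^beta curve.
    r2 = (x ** 2).sum(dim=1)
    smooth = torch.cos(4.0 * x[:, 0]) * torch.sin(4.0 * x[:, 1]) + 1.0
    return torch.where(r2 < 0.25 ** 2, smooth, torch.zeros_like(smooth))

# Fixed-depth ReLU network; depth and width chosen for illustration only.
width = 64
net = nn.Sequential(
    nn.Linear(d, width), nn.ReLU(),
    nn.Linear(width, width), nn.ReLU(),
    nn.Linear(width, width), nn.ReLU(),
    nn.Linear(width, 1),
)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(3000):
    x = torch.rand(1024, d) - 0.5  # uniform samples on [-1/2, 1/2]^2
    loss = ((net(x).squeeze(1) - target(x)) ** 2).mean()  # empirical squared L^2 error
    opt.zero_grad()
    loss.backward()
    opt.step()

# Monte Carlo estimate of the L^2 approximation error and the parameter count
# (the paper's bound counts nonzero weights; numel() counts all parameters).
with torch.no_grad():
    x = torch.rand(200_000, d) - 0.5
    l2_err = ((net(x).squeeze(1) - target(x)) ** 2).mean().sqrt().item()
n_params = sum(p.numel() for p in net.parameters())
print(f"estimated L2 error: {l2_err:.4f}, number of parameters: {n_params}")
```

Repeating such an experiment over a range of target accuracies and network sizes is one informal way to observe the trade-off between depth, weight count, and $L^2$ error that the paper quantifies exactly.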
