Universal Function Approximation by Deep Neural Nets with Bounded Width and ReLU Activations

9 August 2017
Boris Hanin
Abstract

This article concerns the expressive power of depth in neural nets with ReLU activations and bounded width. We are particularly interested in the following questions: what is the minimal width $w_{\text{min}}(d)$ so that ReLU nets of width $w_{\text{min}}(d)$ (and arbitrary depth) can approximate any continuous function on the unit cube $[0,1]^d$ arbitrarily well? For ReLU nets near this minimal width, what can one say about the depth necessary to approximate a given function? Our approach in this paper is based on the observation that, due to the convexity of the ReLU activation, ReLU nets are particularly well-suited for representing convex functions. In particular, we prove that ReLU nets with width $d+1$ can approximate any continuous convex function of $d$ variables arbitrarily well. These results then give quantitative depth estimates for the rate of approximation of any continuous scalar function on the $d$-dimensional cube $[0,1]^d$ by ReLU nets with width $d+3$.
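The architecture class the theorem concerns is a fully connected feed-forward ReLU net whose hidden layers all have width $d+3$ (for inputs in $[0,1]^d$) and whose depth is arbitrary. The sketch below, which is not from the paper, only illustrates that architecture class with randomly chosen weights; the dimension, depth, and helper names are assumptions made for the example, not the explicit construction used in the proofs.

```python
import numpy as np

def relu(x):
    # ReLU activation, applied elementwise.
    return np.maximum(x, 0.0)

def narrow_relu_net(x, weights, biases):
    """Forward pass of a fully connected ReLU net with scalar output.

    Every hidden layer has the same width; the paper studies nets
    whose hidden width is d + 3 for inputs in the unit cube [0, 1]^d.
    """
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(W @ h + b)
    # Final affine layer producing a scalar output (no activation).
    W_out, b_out = weights[-1], biases[-1]
    return W_out @ h + b_out

# Illustrative instantiation (random weights, NOT the paper's construction):
# input dimension d, hidden width d + 3, depth L hidden layers.
d, L = 4, 10
width = d + 3
rng = np.random.default_rng(0)
dims = [d] + [width] * L + [1]
weights = [rng.standard_normal((dims[i + 1], dims[i])) for i in range(len(dims) - 1)]
biases = [rng.standard_normal(dims[i + 1]) for i in range(len(dims) - 1)]

x = rng.uniform(0.0, 1.0, size=d)  # a point in the unit cube [0, 1]^d
print(narrow_relu_net(x, weights, biases))
```

The point of fixing the width at $d+3$ and letting only the depth $L$ grow is that, per the abstract, depth alone then controls the approximation error for continuous functions on $[0,1]^d$.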
