Sharp Bounds on the Approximation Rates, Metric Entropy, and $n$-widths of Shallow Neural Networks

29 January 2021
Jonathan W. Siegel
Jinchao Xu
arXiv:2101.12365
Abstract

In this article, we study approximation properties of the variation spaces corresponding to shallow neural networks with a variety of activation functions. We introduce two main tools for estimating the metric entropy, approximation rates, and $n$-widths of these spaces. First, we introduce the notion of a smoothly parameterized dictionary and give upper bounds on the non-linear approximation rates, metric entropy, and $n$-widths of its absolute convex hull. The upper bounds depend upon the order of smoothness of the parameterization. This result is applied to dictionaries of ridge functions corresponding to shallow neural networks, and the resulting bounds improve upon existing results in many cases. Next, we provide a method for lower bounding the metric entropy and $n$-widths of variation spaces which contain certain classes of ridge functions. This result gives sharp lower bounds on the $L^2$-approximation rates, metric entropy, and $n$-widths for variation spaces corresponding to neural networks with a range of important activation functions, including ReLU$^k$ activation functions and sigmoidal activation functions with bounded variation.
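
For orientation, the display below sketches the standard quantities the abstract refers to. The notation (a Hilbert space $H$ and a dictionary $\mathbb{D}$, e.g. of ridge functions) is generic and not taken verbatim from the paper; it is only meant to fix the definitions of the three quantities being bounded.

For a bounded dictionary $\mathbb{D} \subset H$, the variation space $\mathcal{K}_1(\mathbb{D})$ has as its unit ball $K = B_1(\mathbb{D})$ the closed absolute convex hull of $\mathbb{D}$, with norm given by the gauge
$$\|f\|_{\mathcal{K}_1(\mathbb{D})} = \inf\bigl\{t > 0 : f \in t\,\overline{\mathrm{conv}}(\mathbb{D} \cup -\mathbb{D})\bigr\}.$$
The three quantities studied are then
$$\sigma_n(K)_H = \sup_{f \in K}\ \inf_{a_i \in \mathbb{R},\ g_i \in \mathbb{D}} \Bigl\|f - \sum_{i=1}^{n} a_i g_i\Bigr\|_H \quad \text{(non-linear $n$-term approximation rate),}$$
$$\epsilon_n(K)_H = \inf\bigl\{\epsilon > 0 : K \text{ can be covered by } 2^n \text{ balls of radius } \epsilon\bigr\} \quad \text{(metric entropy),}$$
$$d_n(K)_H = \inf_{\dim V_n = n}\ \sup_{f \in K}\ \inf_{g \in V_n} \|f - g\|_H \quad \text{(Kolmogorov $n$-width).}$$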
