Optimal Approximation Rates and Metric Entropy of ReLU$^k$ and Cosine Networks

This article addresses several fundamental issues associated with the approximation theory of neural networks, including the characterization of approximation spaces, the determination of the metric entropy of these spaces, and the approximation rates of neural networks. For any activation function $\sigma$, we show that the largest Banach space of functions which can be efficiently approximated by the corresponding shallow neural networks is the space whose norm is given by the gauge of the closed convex hull of the set $\{\pm\sigma(\omega\cdot x + b)\}$. We characterize this space for the ReLU$^k$ and cosine activation functions and, in particular, show that the resulting gauge space is equivalent to the spectral Barron space when $\sigma$ is the cosine activation and to the Barron space when $\sigma$ is the ReLU activation. Our main result establishes the precise asymptotics of the $L^2$-metric entropy of the unit ball of these gauge spaces and, as a consequence, the optimal approximation rates for shallow ReLU$^k$ networks. The sharpest previous results hold only in the special case $k = 0$ and $d = 2$, where the metric entropy has been determined up to logarithmic factors. When $k > 0$ or $d > 2$, there is a significant gap between the previous best upper and lower bounds. We close all of these gaps and determine the precise asymptotics of the metric entropy for all $k \geq 0$ and $d \geq 2$, in particular removing the logarithmic factors mentioned above. We then use these results to quantify how much is lost by Barron's spectral condition relative to the convex hull of ReLU$^k$ neurons. Finally, we show that the orthogonal greedy algorithm can algorithmically realize the improved approximation rates which have been derived.
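As a reference for the gauge construction mentioned above, the following display records the standard Minkowski-functional definition of the norm induced by the closed convex hull of a dictionary of neurons; the symbols $\mathbb{D}$ and $\mathcal{K}_1(\mathbb{D})$ are illustrative notation introduced here, not taken from the abstract.

% Minimal sketch of the gauge (Minkowski functional) construction, assuming the
% dictionary of shallow neurons \mathbb{D}; the names \mathbb{D} and
% \mathcal{K}_1(\mathbb{D}) are illustrative notation.
\[
  \|f\|_{\mathcal{K}_1(\mathbb{D})}
    = \inf\bigl\{\, t > 0 \;:\; f \in t\,\overline{\operatorname{conv}}(\mathbb{D}) \,\bigr\},
  \qquad
  \mathcal{K}_1(\mathbb{D}) = \bigl\{\, f \;:\; \|f\|_{\mathcal{K}_1(\mathbb{D})} < \infty \,\bigr\},
\]
where $\mathbb{D} = \{\pm\sigma(\omega\cdot x + b)\}$ denotes the dictionary of neurons and the closure is taken in the norm of the ambient space (for instance $L^2$).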
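The final claim concerns the orthogonal greedy algorithm. The sketch below is a minimal numpy implementation of the generic orthogonal greedy iteration (known in signal processing as orthogonal matching pursuit) over a finite, discretized dictionary; it illustrates the algorithm referenced in the abstract under simplifying assumptions (a fixed grid and a finite set of ReLU neurons) and is not the paper's specific construction. The function name orthogonal_greedy and the example dictionary are hypothetical.

import numpy as np

def orthogonal_greedy(f, dictionary, n_iter):
    """Generic orthogonal greedy algorithm over a finite dictionary (a sketch).

    f          : target vector (a function sampled on a grid), shape (m,)
    dictionary : array of shape (N, m); each row is one normalized dictionary element
    n_iter     : number of greedy steps, i.e. the number of neurons used

    Returns the indices of the selected elements and the orthogonal projection
    of f onto their span.
    """
    residual = f.copy()
    selected = []
    approx = np.zeros_like(f)
    for _ in range(n_iter):
        # Greedy step: pick the element most correlated with the current residual.
        idx = int(np.argmax(np.abs(dictionary @ residual)))
        selected.append(idx)
        # Orthogonalization step: project f onto the span of all selected elements.
        G = dictionary[selected]                      # shape (len(selected), m)
        coeffs, *_ = np.linalg.lstsq(G.T, f, rcond=None)
        approx = G.T @ coeffs
        residual = f - approx
    return selected, approx

# Example usage: approximate a smooth target with ReLU ridge functions in one dimension.
x = np.linspace(-1.0, 1.0, 200)
target = np.sin(np.pi * x)
atoms = np.array([np.maximum(w * x + b, 0.0)
                  for w in (-1.0, 1.0) for b in np.linspace(-1.0, 1.0, 50)])
norms = np.linalg.norm(atoms, axis=1)
mask = norms > 1e-12                                  # drop atoms that vanish on the grid
atoms = atoms[mask] / norms[mask][:, None]            # normalize the remaining atoms
sel, approx = orthogonal_greedy(target, atoms, n_iter=10)
print("relative L2 error:", np.linalg.norm(target - approx) / np.linalg.norm(target))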