Function approximation by deep neural networks with parameters $\{0,\pm \tfrac{1}{2}, \pm 1, 2\}$

Abstract
In this paper it is shown that $\beta$-smooth functions can be approximated by deep neural networks with ReLU activation function and with parameters in $\{0,\pm \tfrac{1}{2}, \pm 1, 2\}$. The $\ell^0$ and $\ell^1$ parameter norms of the considered networks are thus equivalent. The depth, width and the number of active parameters of the constructed networks have, up to a logarithmic factor, the same dependence on the approximation error as networks with parameters in $[-1,1]$. In particular, this means that nonparametric regression estimation with the constructed networks attains the same convergence rate as with sparse networks with parameters in $[-1,1]$.
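A one-line justification of the norm-equivalence claim (a sketch, assuming the parameter set $\{0,\pm \tfrac{1}{2}, \pm 1, 2\}$ from the title): every nonzero parameter $w_i$ satisfies $\tfrac{1}{2}\le |w_i|\le 2$, so for the parameter vector $W$ of such a network
\[
\tfrac{1}{2}\,\|W\|_{0} \;\le\; \|W\|_{1} \;=\; \sum_i |w_i| \;\le\; 2\,\|W\|_{0},
\]
i.e. the $\ell^0$ and $\ell^1$ norms differ by at most a factor of $2$.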