Function approximation by deep neural networks with parameters $\{0, \pm\frac{1}{2}, \pm 1, 2\}$

Abstract

In this paper it is shown that $C_\beta$-smooth functions can be approximated by deep neural networks with ReLU activation function and with parameters in $\{0, \pm\frac{1}{2}, \pm 1, 2\}$. The $l_0$ and $l_1$ parameter norms of the considered networks are thus equivalent. The depth, width, and number of active parameters of the constructed networks have, up to a logarithmic factor, the same dependence on the approximation error as networks with parameters in $[-1,1]$. In particular, this means that nonparametric regression estimation with the constructed networks attains the same convergence rate as with sparse networks with parameters in $[-1,1]$.
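The norm equivalence holds because every nonzero parameter in $\{0, \pm\frac{1}{2}, \pm 1, 2\}$ has absolute value between $\frac{1}{2}$ and $2$, so $\frac{1}{2}\|w\|_0 \le \|w\|_1 \le 2\|w\|_0$ for any parameter vector $w$. A minimal sketch verifying this bound numerically (the vector length and helper names are illustrative, not taken from the paper):

```python
import itertools

# The quantized parameter set considered in the paper.
ALLOWED = (0.0, 0.5, -0.5, 1.0, -1.0, 2.0)

def l0(w):
    """Number of nonzero parameters (the l0 'norm')."""
    return sum(1 for x in w if x != 0.0)

def l1(w):
    """Sum of absolute values of the parameters."""
    return sum(abs(x) for x in w)

# Every nonzero entry has absolute value in [1/2, 2], so the l1 norm
# is sandwiched between l0/2 and 2*l0: the two norms are equivalent.
for w in itertools.product(ALLOWED, repeat=4):
    assert 0.5 * l0(w) <= l1(w) <= 2.0 * l0(w)
print("l0/2 <= l1 <= 2*l0 holds for all vectors over the parameter set")
```

This is what fails for unrestricted parameters in $[-1,1]$: arbitrarily small nonzero weights make the $l_1$ norm arbitrarily smaller than the $l_0$ norm, so the two sparsity measures need not be comparable there.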
