Approximation in $L^p(\mu)$ with deep ReLU neural networks

We discuss the expressive power of neural networks which use the non-smooth ReLU activation function $\varrho(x) = \max\{0, x\}$ by analyzing the approximation theoretic properties of such networks. The existing results mainly fall into two categories: approximation using ReLU networks with a fixed depth, or using ReLU networks whose depth increases with the approximation accuracy. After reviewing these findings, we show that the results concerning networks with fixed depth, which up to now only consider approximation in $L^p(\lambda)$ for the Lebesgue measure $\lambda$, can be generalized to approximation in $L^p(\mu)$, for any finite Borel measure $\mu$. In particular, the generalized results apply in the usual setting of statistical learning theory, where one is interested in approximation in $L^2(P)$, with the probability measure $P$ describing the distribution of the data.
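To fix notation, here is a minimal sketch of the approximation setting referred to above (taking the cube $[0,1]^d$ as the domain is an illustrative assumption, not something fixed by the abstract): the error of a ReLU network $\Phi$ against a target function $f$ is measured as

$$
\| f - \Phi \|_{L^p(\mu)}
  = \left( \int_{[0,1]^d} \lvert f(x) - \Phi(x) \rvert^{p} \, d\mu(x) \right)^{1/p},
\qquad \mu \text{ a finite Borel measure.}
$$

Choosing $\mu = \lambda$ (Lebesgue measure) recovers the setting of the existing fixed-depth results, while choosing $p = 2$ and $\mu = P$ (the data distribution) gives the statistical learning case $\| f - \Phi \|_{L^2(P)}$ mentioned in the abstract.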