Approximation capabilities of neural networks on unbounded domains

Abstract
In this paper, we prove that a shallow neural network with a monotone sigmoid, ReLU, ELU, Softplus, or LeakyReLU activation function can arbitrarily well approximate any L^p (p >= 2) integrable function defined on R × [0,1]^n. We also prove that a shallow neural network with a sigmoid, ReLU, ELU, Softplus, or LeakyReLU activation function expresses no nonzero integrable function defined on the Euclidean plane. Together with a recent result that deep ReLU networks can arbitrarily well approximate any integrable function on Euclidean spaces, we provide a new perspective on the advantage of multiple hidden layers in the context of ReLU networks. Lastly, we prove that a ReLU network of depth 3 is a universal approximator in L^p(R^n).
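
The following is a minimal sketch, not taken from the paper, of the two network classes the abstract refers to, under the standard assumptions that a "shallow" network means one hidden layer of the stated activation followed by a linear output, and that "depth 3" means three hidden ReLU layers (the paper's own depth convention may differ). All weights, sizes, and the sample input below are arbitrary illustrative choices.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def shallow_net(x, W, b, c, activation=sigmoid):
    """One hidden layer: f(x) = sum_i c_i * activation(w_i . x + b_i)."""
    return c @ activation(W @ x + b)

def depth3_relu_net(x, params):
    """Three hidden ReLU layers followed by a linear output layer.
    params is a list of (W, b) pairs, the last pair being the output layer."""
    h = x
    for W, b in params[:-1]:
        h = relu(W @ h + b)
    W_out, b_out = params[-1]
    return W_out @ h + b_out

# Illustrative use: a random shallow network on R x [0,1]^2 (input dimension 3).
rng = np.random.default_rng(0)
W = rng.standard_normal((16, 3))
b = rng.standard_normal(16)
c = rng.standard_normal(16)
x = np.array([2.5, 0.3, 0.7])  # first coordinate unbounded, remaining ones in [0,1]
print(shallow_net(x, W, b, c))
```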