The Representation Power of Neural Networks: Breaking the Curse of Dimensionality

Abstract
In this paper, we analyze the number of neurons and trainable parameters that a neural network needs to approximate multivariate functions of bounded second mixed derivatives -- Korobov functions. We prove upper bounds on these quantities for shallow and deep neural networks, breaking the curse of dimensionality. Our bounds hold for general activation functions, including ReLU. We further prove that these bounds nearly match the minimal number of parameters any continuous function approximator needs to approximate Korobov functions, showing that neural networks are near-optimal function approximators.
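For reference, the Korobov space mentioned above is standardly defined as follows (a sketch of the definition commonly used in the sparse-grid approximation literature; the paper's exact norm conventions may differ):

$$X^{2,\infty}([0,1]^d) = \left\{ f \in L^2([0,1]^d) \;:\; f|_{\partial [0,1]^d} = 0, \;\; \max_{|\alpha|_\infty \le 2} \|D^{\alpha} f\|_{L^\infty} < \infty \right\},$$

where $\alpha \in \mathbb{N}^d$ is a multi-index and $D^{\alpha} f$ the corresponding mixed partial derivative. "Bounded second mixed derivatives" thus means that every partial derivative of order at most two in each coordinate is bounded.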