Negative results for approximation using single layer and multilayer feedforward neural networks

Abstract
We prove a negative result for the approximation of functions defined on compact subsets of $[0,1]^d$ (where $d \geq 2$) using feedforward neural networks with one hidden layer and arbitrary continuous activation function. In a nutshell, this result establishes the existence of target functions that are as difficult to approximate using these neural networks as one may want. We also demonstrate an analogous result (for general $d$) for neural networks with an \emph{arbitrary} number of hidden layers, for activation functions that are either rational functions or continuous splines with finitely many pieces.
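For context, a feedforward network with one hidden layer of $n$ neurons and activation function $\sigma$ realizes functions of the standard form below (this is the usual definition; the notation $a_j, b_j, c_j$ is illustrative, not taken from the paper):

```latex
f(x) \;=\; \sum_{j=1}^{n} c_j \,\sigma\bigl(\langle a_j, x \rangle + b_j\bigr),
\qquad a_j \in \mathbb{R}^d, \quad b_j, c_j \in \mathbb{R}.
```

A negative result of the kind stated means, roughly, that for any prescribed sequence of approximation errors decaying to zero, there is a continuous target function whose best-approximation error by such networks (as $n \to \infty$) decays no faster than that sequence.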