On approximating $\nabla f$ with neural networks

Abstract
Consider a feedforward neural network $\psi: \mathbb{R}^d \to \mathbb{R}^d$ such that $\psi \approx \nabla f$, where $f: \mathbb{R}^d \to \mathbb{R}$ is a smooth function; therefore $\psi$ must satisfy $\partial_j \psi_i = \partial_i \psi_j$ pointwise. We prove a theorem that a network with more than one hidden layer can only represent one feature in its first hidden layer; this is a dramatic departure from the well-known results for one hidden layer. The proof of the theorem is straightforward, where two backward paths and a weight-tying matrix play the key roles. We then present the alternative, the implicit parametrization, where the neural network is $f$ and $\psi = \nabla f$; in addition, a "soft analysis" of $\partial_j \psi_i - \partial_i \psi_j$ gives a dual perspective on the theorem. Throughout, we come back to recent probabilistic models that are formulated as $\psi \approx \nabla f$, and conclude with a critique of denoising autoencoders.
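A minimal sketch of the implicit parametrization described above, assuming JAX and a hypothetical two-layer scalar architecture (not the paper's code): the network is a scalar function $f$, $\psi = \nabla f$ is obtained by automatic differentiation, and the Jacobian of $\psi$, being the Hessian of $f$, satisfies the symmetry condition $\partial_j \psi_i = \partial_i \psi_j$ by construction.

```python
# Illustrative sketch: implicit parametrization psi = grad f.
# Architecture and sizes are assumptions for the example only.
import jax
import jax.numpy as jnp

def f(params, x):
    # Scalar-valued MLP f: R^d -> R (hypothetical two-layer network).
    W1, b1, W2, b2 = params
    h = jnp.tanh(W1 @ x + b1)      # first hidden layer
    return jnp.dot(W2, h) + b2     # scalar output

# psi = grad_x f, a map R^d -> R^d, obtained by automatic differentiation.
psi = jax.grad(f, argnums=1)

d, m = 3, 8
key = jax.random.PRNGKey(0)
k1, k2, kx = jax.random.split(key, 3)
params = (jax.random.normal(k1, (m, d)), jnp.zeros(m),
          jax.random.normal(k2, (m,)), 0.0)
x = jax.random.normal(kx, (d,))

# Jacobian of psi equals the Hessian of f, hence symmetric pointwise:
# d_j psi_i == d_i psi_j (up to floating-point error).
J = jax.jacobian(psi, argnums=1)(params, x)
print(jnp.allclose(J, J.T, atol=1e-6))
```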