
A Corrective View of Neural Networks: Representation, Memorization and Learning

Abstract

We develop a corrective mechanism for neural network approximation: the total available non-linear units are divided into multiple groups and the first group approximates the function under consideration, the second group approximates the error in the approximation produced by the first group and corrects it, the third group approximates the error produced by the first and second groups together, and so on. This technique yields several new representation and learning results for neural networks. First, we show that two-layer neural networks in the random features (RF) regime can memorize arbitrary labels for arbitrary points under a Euclidean distance separation condition using $\tilde{O}(n)$ ReLUs, which is optimal in $n$ up to logarithmic factors. Next, we give a powerful representation result for two-layer neural networks with ReLUs and smoothed ReLUs which can achieve a squared error of at most $\epsilon$ with $O(C(a,d)\epsilon^{-1/(a+1)})$ non-linear units for $a \in \mathbb{N}\cup\{0\}$ when the function is smooth enough (roughly when it has $\Theta(ad)$ bounded derivatives). In certain cases $d$ can be replaced with an effective dimension $q \ll d$. Previous results of this type implement Taylor series approximation using deep architectures. We also consider three-layer neural networks and show that the corrective mechanism yields faster representation rates for smooth radial functions. Lastly, we obtain the first $O(\mathrm{subpoly}(1/\epsilon))$ upper bound on the number of neurons required for a two-layer network to learn low-degree polynomials up to squared error $\epsilon$ via gradient descent. Even though deep networks can express these polynomials with $O(\mathrm{polylog}(1/\epsilon))$ neurons, the best learning bounds on this problem require $\mathrm{poly}(1/\epsilon)$ neurons.
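To make the corrective mechanism concrete, the following is a minimal sketch in the random features (RF) regime, under assumptions not taken from the paper: hidden weights in each group are random and frozen, and only the output layer of each group is fit (here by ridge-regularized least squares) to the residual left by all previous groups. Names such as `corrective_rf_fit`, `fit_group`, `n_groups`, and `units_per_group` are illustrative and not the authors' construction.

```python
# Sketch of the corrective mechanism in the random features regime:
# each group of ReLUs fits the residual left by the previous groups.
import numpy as np


def relu(z):
    return np.maximum(z, 0.0)


def fit_group(X, residual, units, rng, ridge=1e-6):
    """Fit one group of random-feature ReLUs to the current residual.

    Hidden weights are random and frozen (RF regime); only the output
    weights are solved for, via ridge-regularized least squares.
    """
    d = X.shape[1]
    W = rng.normal(size=(d, units)) / np.sqrt(d)   # random hidden weights
    b = rng.uniform(-1.0, 1.0, size=units)         # random biases
    H = relu(X @ W + b)                            # hidden activations
    a = np.linalg.solve(H.T @ H + ridge * np.eye(units), H.T @ residual)
    return lambda Z: relu(Z @ W + b) @ a


def corrective_rf_fit(X, y, n_groups=5, units_per_group=50, seed=0):
    """Each group approximates the error produced by all previous groups."""
    rng = np.random.default_rng(seed)
    groups, residual = [], y.copy()
    for _ in range(n_groups):
        g = fit_group(X, residual, units_per_group, rng)
        groups.append(g)
        residual = residual - g(X)                 # correct the remaining error
    return lambda Z: sum(g(Z) for g in groups)     # sum of all group outputs


if __name__ == "__main__":
    # Toy usage: the training error shrinks as groups are added.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 4))
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2
    f_hat = corrective_rf_fit(X, y)
    print("train MSE:", np.mean((f_hat(X) - y) ** 2))
```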
