Disentangling feature and lazy learning in deep neural networks: an empirical study

Abstract

Two distinct limits for deep learning have been proposed as the network width $h \to \infty$, depending on how the weights of the last layer scale with $h$. In the "lazy-learning" regime, the dynamics becomes linear in the weights and is described by a Neural Tangent Kernel $\Theta$. By contrast, in the "feature-learning" regime, the dynamics can be expressed in terms of the density distribution of the weights. Understanding which regime accurately describes practical architectures, and which one leads to better performance, remains a challenge. We answer these questions and produce new characterizations of these regimes for the MNIST data set by considering deep nets $f$ whose last layer of weights scales as $\frac{\alpha}{\sqrt{h}}$ at initialization, where $\alpha$ is a parameter we vary. We performed systematic experiments on two setups: (A) fully-connected networks with Softplus activation trained by full-batch gradient descent with momentum, and (B) convolutional networks with ReLU activation trained by stochastic gradient descent with momentum. We find that (1) $\alpha^* = \frac{1}{\sqrt{h}}$ separates the two regimes. (2) For both (A) and (B), feature learning outperforms lazy learning; the difference in performance decreases with $h$ and becomes hardly detectable asymptotically for (A), but remains very significant for (B). (3) In both regimes, the fluctuations $\delta f$ induced by initial conditions on the learned function follow $\delta f \sim 1/\sqrt{h}$, leading to a performance that increases with $h$. The same improvement can instead be obtained at intermediate $h$ values by ensemble-averaging different networks. (4) In the feature regime there exists a time scale $t_1 \sim \alpha\sqrt{h}$ such that for $t \ll t_1$ the dynamics is linear. At $t \sim t_1$, the output has grown by a factor of $\sqrt{h}$ and the change of the tangent kernel $\|\Delta\Theta\|$ becomes significant. Ultimately, it follows $\|\Delta\Theta\| \sim (\sqrt{h}\,\alpha)^{-a}$ for both ReLU and Softplus activations, with $a < 2$ and $a \to 2$ as the depth grows.
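The central knob in the abstract is the $\frac{\alpha}{\sqrt{h}}$ scaling of the last layer of weights. A minimal sketch (not the authors' code; function names and hyperparameters are illustrative, assuming NumPy) of a one-hidden-layer Softplus net with this scaling shows why $\alpha$ controls the regime: at initialization the standard deviation of the output $f$ grows linearly with $\alpha$, so large $\alpha$ yields large outputs whose dynamics stay close to the linearized (lazy) description, while small $\alpha$ forces the weights to move and learn features.

```python
import numpy as np

def init_net(d, h, alpha, rng):
    """One-hidden-layer net; hypothetical helper for illustration."""
    # Hidden weights with the usual 1/sqrt(fan-in) scaling.
    W = rng.standard_normal((h, d)) / np.sqrt(d)
    # Last layer scaled as alpha / sqrt(h), as in the setup above.
    a = alpha * rng.standard_normal(h) / np.sqrt(h)
    return W, a

def f(x, W, a):
    # Softplus activation, as in setup (A).
    return a @ np.log1p(np.exp(W @ x))
```

Since the $h$ output-layer terms are independent with standard deviation $\propto \alpha/\sqrt{h}$, their sum has standard deviation $\propto \alpha$, independent of $h$; sampling many initializations confirms that doubling $\alpha$ roughly doubles the spread of $f$ at $t = 0$.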
