Disentangling feature and lazy learning in deep neural networks: an
empirical study
Two distinct limits for deep learning as the net width $h \to \infty$ have been proposed, depending on how the weights of the last layer scale with $h$. In the "lazy-learning" regime, the dynamics becomes linear in the weights and is described by a Neural Tangent Kernel $\Theta$. By contrast, in the "feature-learning" regime, the dynamics can be expressed in terms of the density distribution of the weights. Understanding which regime accurately describes practical architectures, and which one leads to better performance, remains a challenge. We answer these questions and produce new characterizations of these regimes for the MNIST data set, by considering deep nets whose last layer of weights scales as $\alpha h^{-1/2}$ at initialization, where $\alpha$ is a parameter we vary. We performed systematic experiments on two setups: (A) a fully-connected network with Softplus activations trained full-batch with momentum, and (B) a convolutional network with ReLU activations trained with stochastic gradient descent with momentum. We find that (1) $\alpha^* \sim 1/\sqrt{h}$ separates the two regimes. (2) For (A) and (B), feature learning outperforms lazy learning, a difference in performance that decreases with $h$ and becomes hardly detectable asymptotically for (A) but is very significant for (B). (3) In both regimes, the fluctuations $\delta F$ induced by initial conditions on the learned function follow $\delta F \sim 1/\sqrt{h}$, leading to a performance that increases with $h$. This improvement can instead be obtained at intermediate $h$ values by ensemble-averaging different networks. (4) In the feature regime there exists a time scale $t_1 \sim \sqrt{h}\,\alpha$ such that for $t \ll t_1$ the dynamics is linear. At $t \sim t_1$, the output has grown by a magnitude $\sqrt{h}$ and the changes of the tangent kernel $\|\Delta\Theta\|$ become significant. Ultimately, it follows $\|\Delta\Theta\| \sim (\sqrt{h}\,\alpha)^{-a}$ for ReLU and Softplus activations, with $a < 2$, and $a \to 2$ when depth grows.
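Since the abstract hinges on the $\alpha h^{-1/2}$ readout scaling, the following is a minimal PyTorch sketch of one way to realize it. This is not the authors' code: the class name AlphaScaledNet, the two-hidden-layer depth, and the use of a fixed $\alpha/\sqrt{h}$ prefactor on the output (rather than rescaling the weight values themselves at initialization) are illustrative assumptions.

```python
import torch
import torch.nn as nn


class AlphaScaledNet(nn.Module):
    """Fully-connected MNIST net in the style of setup (A) (Softplus) whose
    readout carries an alpha * h**-0.5 prefactor, the knob the paper varies."""

    def __init__(self, d_in=784, h=512, d_out=10, alpha=1.0):
        super().__init__()
        self.alpha = alpha
        self.h = h
        self.hidden = nn.Sequential(
            nn.Linear(d_in, h), nn.Softplus(),
            nn.Linear(h, h), nn.Softplus(),
        )
        self.readout = nn.Linear(h, d_out, bias=False)

    def forward(self, x):
        # Large alpha -> lazy (near-linear, NTK-like) dynamics; small alpha ->
        # feature learning; the crossover sits around alpha* ~ 1/sqrt(h).
        return self.alpha * self.h ** -0.5 * self.readout(self.hidden(x))


net = AlphaScaledNet(h=512, alpha=512 ** -0.5)  # initialized near the crossover
logits = net(torch.randn(32, 784))              # -> shape (32, 10)
```

Setup (B) would swap the Softplus fully-connected trunk for a ReLU convolutional one and train with stochastic gradients plus momentum; the $\alpha h^{-1/2}$ readout scaling is the same.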
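Finding (3) suggests a practical recipe: the $\delta F \sim 1/\sqrt{h}$ initialization fluctuations can be suppressed at moderate width by averaging the outputs of independently initialized networks. A hedged sketch, reusing the hypothetical AlphaScaledNet above; the function ensemble_predict and the ensemble size are assumptions for illustration, not the paper's procedure.

```python
import torch


def ensemble_predict(models, x):
    """Average the outputs of networks trained from different initial conditions."""
    with torch.no_grad():
        return torch.stack([m(x) for m in models]).mean(dim=0)


# Usage: in practice each net would be trained from its own random seed first;
# here they are left untrained purely to show the averaging mechanics.
models = [AlphaScaledNet(h=256, alpha=1.0) for _ in range(10)]
preds = ensemble_predict(models, torch.randn(32, 784)).argmax(dim=1)
```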