
Large-time asymptotics in deep learning

Abstract

We consider the neural ODE perspective on supervised learning and study the impact of the final time $T$ (which may indicate the depth of a corresponding ResNet) on training. For the classical $L^2$-regularized empirical risk minimization problem, whenever the neural ODE dynamics are homogeneous with respect to the parameters, we show that the training error is at most of the order $\mathcal{O}\left(\frac{1}{T}\right)$. Furthermore, if the loss inducing the empirical risk attains its minimum, the optimal parameters converge to minimal $L^2$-norm parameters which interpolate the dataset. By a natural scaling between $T$ and the regularization hyperparameter $\lambda$, we obtain the same results when $\lambda\searrow 0$ and $T$ is fixed. This allows us to stipulate generalization properties in the overparametrized regime, now seen from the large-depth, neural ODE perspective. To enhance the polynomial decay, inspired by turnpike theory in optimal control, we propose a learning problem with an additional integral regularization term of the neural ODE trajectory over $[0,T]$. In the setting of $\ell^p$-distance losses, we prove that both the training error and the optimal parameters are at most of the order $\mathcal{O}\left(e^{-\mu t}\right)$ for every $t\in[0,T]$. These stability estimates are also shown for continuous space-time neural networks, which take the form of nonlinear integro-differential equations. By using a time-dependent moving grid to discretize the spatial variable, we show that these equations provide a framework for addressing ResNets with variable widths.
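
For concreteness, here is a minimal sketch of the two training problems referenced above; the specific form of the dynamics and the symbols $\sigma$, $P$, and $\mathrm{loss}$ are illustrative assumptions, not notation quoted from the paper. Given a dataset $\{(x_i, y_i)\}_{i=1}^{N}$ and parameter-homogeneous neural ODE dynamics such as $\dot{x}_i(t) = \sigma\big(w(t)\,x_i(t) + b(t)\big)$ on $[0,T]$, the classical $L^2$-regularized empirical risk minimization may be written as
$$\min_{(w,b)} \; \frac{1}{N}\sum_{i=1}^{N} \mathrm{loss}\big(P\,x_i(T),\, y_i\big) \;+\; \lambda \int_0^T \big\|\big(w(t), b(t)\big)\big\|^2 \, dt,$$
whereas the turnpike-inspired variant additionally tracks the empirical risk along the whole trajectory,
$$\min_{(w,b)} \; \int_0^T \frac{1}{N}\sum_{i=1}^{N} \mathrm{loss}\big(P\,x_i(t),\, y_i\big)\, dt \;+\; \lambda \int_0^T \big\|\big(w(t), b(t)\big)\big\|^2 \, dt.$$
In the first problem the training error decays like $\mathcal{O}(1/T)$, while the added integral tracking term is what underlies the exponential $\mathcal{O}\left(e^{-\mu t}\right)$ estimates stated in the abstract.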

