Online Learning of Neural Networks

We study online learning of feedforward neural networks with the sign activation function that implement functions from the unit ball in $\mathbb{R}^d$ to a finite label set. First, we characterize a margin condition that is sufficient, and in some cases necessary, for online learnability of a neural network: every neuron in the first hidden layer classifies all instances with some margin $\gamma$ bounded away from zero. Quantitatively, we prove that for any net, the optimal mistake bound is at most approximately the $\gamma$-totally-separable-packing number of the unit ball, a more restricted variant of the standard $\gamma$-packing number. We complement this result by constructing a net on which any learner makes that many mistakes. We also give a quantitative lower bound on this packing number under a mild condition on the margin, implying that for some nets and input sequences every learner must err a number of times that grows with the dimension $d$, and that a dimension-free mistake bound is almost always impossible.

To remedy this inevitable dependence on $d$, it is natural to seek additional restrictions on the network under which the dependence on $d$ is removed. We study two such restrictions. The first is the multi-index model, in which the function computed by the net depends only on $k \ll d$ orthonormal directions. We prove a mistake bound in this model that is independent of the ambient dimension. The second is the extended margin assumption: all neurons (in all layers) of the network classify every incoming input from the previous layer with a margin bounded away from zero. In this model, we prove a mistake bound that is independent of $d$ and depends on the depth $L$ of the network.
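For concreteness, here is one standard way to formalize the packing quantity; this is the usual discrete-geometry notion of a totally separable packing, stated as a sketch under the assumption that the packing lives in the unit ball $B^d \subseteq \mathbb{R}^d$ and that the margin $\gamma$ sets the packing scale (the shorthand $\mathrm{TS}(d,\gamma)$ is used here just for illustration):

\[
\mathrm{TS}(d,\gamma) \;=\; \max\Bigl\{\, |P| \;:\; P \subseteq B^d,\ \|p-q\| \ge \gamma \text{ for all distinct } p,q \in P, \text{ and any two balls of radius } \tfrac{\gamma}{2} \text{ centered at points of } P \text{ can be separated by a hyperplane avoiding all such balls} \,\Bigr\}.
\]

Dropping the hyperplane-separation requirement recovers the standard $\gamma$-packing number, which is why the totally separable version is never larger.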
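A minimal Python sketch may help fix ideas: a sign-activation forward pass, the first-layer margin in terms of which the learnability characterization is stated, and the online protocol whose mistake count the bounds control. All function names and the learner interface below are illustrative assumptions, not from the paper.

import numpy as np

def sign_net_forward(weights, x):
    # Feedforward net with sign activations: layer l computes sign(W_l h).
    h = x
    for W in weights:
        h = np.sign(W @ h)
    return h

def first_layer_margin(W1, xs):
    # Distance from each instance (a row of xs) to each first-layer
    # hyperplane {x : <w, x> = 0}. The margin condition asks that the
    # smallest such distance be bounded away from zero.
    margins = np.abs(xs @ W1.T) / np.linalg.norm(W1, axis=1)
    return margins.min()

def count_mistakes(learner, stream):
    # Standard online protocol: predict, observe the true label, update.
    mistakes = 0
    for x, y in stream:
        y_hat = learner.predict(x)
        learner.update(x, y)  # the learner sees y only after predicting
        mistakes += int(y_hat != y)
    return mistakes

Under the first-layer margin condition, first_layer_margin(W1, xs) stays at least $\gamma$ over the whole instance stream; the paper's upper and lower bounds then control the best achievable value of count_mistakes under that assumption.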
@article{daniely2025_2505.09167,
  title   = {Online Learning of Neural Networks},
  author  = {Amit Daniely and Idan Mehalel and Elchanan Mossel},
  journal = {arXiv preprint arXiv:2505.09167},
  year    = {2025}
}