We investigate the sample complexity of networks with bounds on the magnitude of their weights. In particular, we consider the class \[ H=\left\{W_t\circ\rho\circ W_{t-1}\circ\rho\circ \ldots\circ\rho\circ W_{1} : W_1,\ldots,W_{t-1}\in M_{d, d},\; W_t\in M_{1,d}\right\} \] where the spectral norm of each $W_i$ is bounded by $O(1)$, the Frobenius norm is bounded by $R$, and $\rho$ is the sigmoid function $\frac{1}{1+e^{-x}}$ or the smoothened ReLU function $\ln(1+e^x)$. We show that for any depth $t$, if the inputs are in $[-1,1]^d$, the sample complexity of $H$ is $\tilde{O}\left(\frac{dR^2}{\epsilon^2}\right)$. This bound is optimal up to log-factors, and substantially improves over the previous state of the art of $\tilde{O}\left(\frac{d^2R^2}{\epsilon^2}\right)$.

We furthermore show that this bound remains valid if, instead of considering the magnitude of the $W_i$'s, we consider the magnitude of $W_i - W_i^0$, where the $W_i^0$ are some reference matrices with spectral norm of $O(1)$. By taking the $W_i^0$ to be the matrices at the onset of the training process, we get sample complexity bounds that are sub-linear in the number of parameters, in many typical regimes of parameters.

To establish our results we develop a new technique to analyze the sample complexity of families $H$ of predictors. We start by defining a new notion of a randomized approximate description of functions $f:\mathcal{X}\to\mathbb{R}^d$. We then show that if there is a way to approximately describe functions in a class $H$ using $d$ bits, then $\frac{d}{\epsilon^2}$ examples suffice to guarantee uniform convergence, namely, that the empirical loss of all the functions in the class is $\epsilon$-close to the true loss. Finally, we develop a set of tools for calculating the approximate description length of classes of functions that can be presented as a composition of linear function classes and non-linear functions.
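As a concrete illustration of the hypothesis class described above (this sketch is not from the paper), the code below builds a depth-$t$ network $W_t\circ\rho\circ\ldots\circ\rho\circ W_1$ with the smoothened ReLU $\rho(x)=\ln(1+e^x)$ and enforces the norm constraints by simple rescaling; the helper names `sample_network` and `forward`, and the choice of spectral bound $1$, are illustrative assumptions.

```python
import numpy as np

def smoothened_relu(x):
    # rho(x) = ln(1 + e^x), the smoothened ReLU from the abstract
    return np.log1p(np.exp(x))

def sample_network(d, t, R, rng):
    """Draw W_1, ..., W_t at random and rescale so each matrix has
    spectral norm <= 1 (a stand-in for O(1)) and Frobenius norm <= R."""
    Ws = []
    for i in range(t):
        rows = 1 if i == t - 1 else d          # W_t maps R^d -> R, the rest are d x d
        W = rng.standard_normal((rows, d))
        W /= max(np.linalg.norm(W, 2), 1.0)    # spectral norm (largest singular value) <= 1
        fro = np.linalg.norm(W, 'fro')
        if fro > R:
            W *= R / fro                       # Frobenius norm <= R (rescaling keeps spectral norm <= 1)
        Ws.append(W)
    return Ws

def forward(Ws, x):
    """Compute (W_t o rho o W_{t-1} o ... o rho o W_1)(x)."""
    h = x
    for W in Ws[:-1]:
        h = smoothened_relu(W @ h)
    return Ws[-1] @ h

rng = np.random.default_rng(0)
d, t, R = 8, 3, 4.0
Ws = sample_network(d, t, R, rng)
x = rng.uniform(-1.0, 1.0, size=d)             # an input in [-1, 1]^d
print(forward(Ws, x))
```

Rescaling to meet the norm bounds is just one simple way to produce members of such a class for experimentation; the paper's analysis concerns the sample complexity of the whole class, not any particular sampling scheme.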