The foundations of deep learning are supported by the seemingly opposing perspectives of approximation theory and learning theory. The former advocates for large/expressive models that need not generalize, while the latter considers classes that generalize but may be too small/constrained to be universal approximators. Motivated by real-world deep learning implementations that are both expressive and statistically reliable, we ask: "Is there a class of neural networks that is large enough to be universal yet structured enough to generalize?" This paper constructively provides a positive answer to this question by identifying a highly structured class of ReLU multilayer perceptrons (MLPs), which are optimal function approximators and are statistically well-behaved. We show that any $(L,\alpha)$-Hölder function from $[0,1]^d$ to $[-n,n]$ can be approximated to a uniform error of $\mathcal{O}(L(d/n)^{\alpha})$ on $[0,1]^d$ by a sparsely connected ReLU MLP with the same Hölder exponent $\alpha$ and coefficient $L$, of width $\mathcal{O}(dn^d)$, depth $\mathcal{O}(\log(d))$, with $\mathcal{O}(dn^d)$ nonzero parameters, and whose weights and biases take values in $\{0,\pm 1/2\}$ except in the first and last layers, which instead have magnitude at most $n$. Further, our class of MLPs achieves a near-optimal sample complexity of $\mathcal{O}(\log(N)/\sqrt{N})$ when given $N$ i.i.d. normalized sub-Gaussian training samples. We achieve this through a new construction that perfectly fits together linear pieces using Kuhn triangulations, together with a new proof technique showing that our construction preserves the regularity not only of Hölder functions but also of any uniformly continuous function. Our results imply that neural networks can solve the McShane extension problem on suitable finite sets.
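As a rough illustration of why exactly fitting linear pieces together preserves regularity, the minimal sketch below (our own toy example in dimension $d=1$ with NumPy, not the paper's Kuhn-triangulation construction; the helper name `pwl_interpolant_as_relu_net` is hypothetical) writes the piecewise-linear interpolant of a Lipschitz function on a uniform grid as a one-hidden-layer ReLU network. Because adjacent pieces meet exactly at the grid points, with no corrective spikes, the network's Lipschitz constant never exceeds that of the target, while the uniform error shrinks as the grid is refined.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def pwl_interpolant_as_relu_net(f, n):
    """Evaluate the piecewise-linear interpolant of f on the uniform grid
    {0, 1/n, ..., 1} as a one-hidden-layer ReLU network (toy d = 1 case)."""
    knots = np.linspace(0.0, 1.0, n + 1)        # breakpoints k/n
    vals = f(knots)                             # target sampled at the knots
    slopes = np.diff(vals) * n                  # slope of each linear piece
    # One ReLU unit hinged at each knot except the last; its output weight is
    # the change of slope at that knot, so adjacent linear pieces meet exactly.
    slope_changes = np.concatenate(([slopes[0]], np.diff(slopes)))

    def net(x):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        hidden = relu(x[:, None] - knots[None, :-1])   # shape (len(x), n)
        return vals[0] + hidden @ slope_changes
    return net

if __name__ == "__main__":
    f = lambda x: np.sin(2.0 * np.pi * x) / (2.0 * np.pi)   # 1-Lipschitz target
    net = pwl_interpolant_as_relu_net(f, n=16)
    xs = np.linspace(0.0, 1.0, 2001)
    print("uniform error:", np.max(np.abs(net(xs) - f(xs))))
    # The network's Lipschitz constant is max |slope|, which never exceeds 1.
```

In this one-dimensional sketch the interpolant's slope on each piece is a difference quotient of the target, so its Lipschitz (or Hölder) regularity is inherited automatically; the paper's contribution is, in part, achieving the analogous exact fit in higher dimensions via Kuhn triangulations.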
@article{hong2025_2409.12335,
  title   = {Bridging the Gap Between Approximation and Learning via Optimal Approximation by ReLU MLPs of Maximal Regularity},
  author  = {Ruiyang Hong and Anastasis Kratsios},
  journal = {arXiv preprint arXiv:2409.12335},
  year    = {2025}
}