
Bridging the Gap Between Approximation and Learning via Optimal Approximation by ReLU MLPs of Maximal Regularity

18 September 2024
Ruiyang Hong
Anastasis Kratsios
Main: 58 pages, 11 figures, 1 table; bibliography: 6 pages
Abstract

The foundations of deep learning are supported by the seemingly opposing perspectives of approximation or learning theory. The former advocates for large/expressive models that need not generalize, while the latter considers classes that generalize but may be too small/constrained to be universal approximators. Motivated by real-world deep learning implementations that are both expressive and statistically reliable, we ask: "Is there a class of neural networks that is both large enough to be universal but structured enough to generalize?" This paper constructively provides a positive answer to this question by identifying a highly structured class of ReLU multilayer perceptrons (MLPs), which are optimal function approximators and are statistically well-behaved. We show that any $(L,\alpha)$-Hölder function from $[0,1]^d$ to $[-n,n]$ can be approximated to a uniform $\mathcal{O}(1/n)$ error on $[0,1]^d$ with a sparsely connected ReLU MLP with the same Hölder exponent $\alpha$ and coefficient $L$, of width $\mathcal{O}(dn^{d/\alpha})$, depth $\mathcal{O}(\log(d))$, with $\mathcal{O}(dn^{d/\alpha})$ nonzero parameters, and whose weights and biases take values in $\{0,\pm 1/2\}$ except in the first and last layers, which instead have magnitude at most $n$. Further, our class of MLPs achieves a near-optimal sample complexity of $\mathcal{O}(\log(N)/\sqrt{N})$ when given $N$ i.i.d. normalized sub-Gaussian training samples. We achieve this through a new construction that perfectly fits together linear pieces using Kuhn triangulations, along with a new proof technique which shows that our construction preserves the regularity not only of Hölder functions but also of any uniformly continuous function. Our results imply that neural networks can solve the McShane extension problem on suitable finite sets.
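
To make the width-versus-accuracy trade-off concrete, below is a minimal one-dimensional sketch, not the paper's Kuhn-triangulation construction: an $(L,\alpha)$-Hölder function on $[0,1]$ is approximated by its piecewise-linear interpolant on a uniform grid, expressed as a one-hidden-layer ReLU network with one neuron per grid cell. The helper name piecewise_linear_relu, the grid size m, and the test function are illustrative assumptions; the printed uniform error decays roughly like $L\,(1/m)^{\alpha}$, mirroring how the theorem trades network width for uniform accuracy.

import numpy as np

# Illustrative sketch (assumption: 1D analogue, not the authors' construction).
# An (L, alpha)-Hölder function on [0, 1] is approximated by its piecewise
# linear interpolant on a uniform grid, which a one-hidden-layer ReLU network
# represents exactly: one hidden neuron per grid cell.

L, alpha = 1.0, 0.5                      # Hölder coefficient and exponent
f = lambda x: np.sqrt(x)                 # a (1, 1/2)-Hölder function on [0, 1]

def relu(x):
    return np.maximum(x, 0.0)

def piecewise_linear_relu(x, knots, values):
    """Evaluate the piecewise-linear interpolant of (knots, values) written as
    g(x) = values[0] + sum_k c_k * relu(x - knots[k]), i.e. a ReLU network
    whose hidden width equals the number of grid cells."""
    slopes = np.diff(values) / np.diff(knots)        # slope on each cell
    coeffs = np.diff(slopes, prepend=0.0)            # change of slope at each knot
    hidden = relu(x[:, None] - knots[:-1][None, :])  # hidden-layer activations
    return values[0] + hidden @ coeffs

m = 64                                    # number of grid cells (network width)
knots = np.linspace(0.0, 1.0, m + 1)
values = f(knots)                         # interpolation preserves f's regularity

x = np.linspace(0.0, 1.0, 10_000)
err = np.max(np.abs(f(x) - piecewise_linear_relu(x, knots, values)))
print(f"uniform error with {m} ReLU units: {err:.4f}")  # shrinks roughly like L * (1/m)**alpha

The paper's construction handles the multivariate case by fitting the linear pieces together on a Kuhn triangulation of $[0,1]^d$, which is what lets the interpolant keep the same Hölder exponent and coefficient while using quantized weights outside the first and last layers.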

@article{hong2025_2409.12335,
  title={Bridging the Gap Between Approximation and Learning via Optimal Approximation by ReLU MLPs of Maximal Regularity},
  author={Ruiyang Hong and Anastasis Kratsios},
  journal={arXiv preprint arXiv:2409.12335},
  year={2025}
}