How DNNs break the Curse of Dimensionality: Compositionality and Symmetry Learning

8 July 2024
Arthur Jacot
Seok Hoan Choi
Yuxiao Wen
Abstract

We show that deep neural networks (DNNs) can efficiently learn any composition of functions with bounded $F_1$-norm, which allows DNNs to break the curse of dimensionality in ways that shallow networks cannot. More specifically, we derive a generalization bound that combines a covering-number argument for compositionality with the $F_1$-norm (or the related Barron norm) for large-width adaptivity. We show that the global minimizer of the regularized loss of DNNs can, for example, fit the composition of two functions $f^{*} = h \circ g$ from a small number of observations, assuming $g$ is smooth/regular and reduces the dimensionality (e.g. $g$ could be the quotient map of the symmetries of $f^{*}$), so that $h$ can be learned in spite of its low regularity. The measure of regularity we consider is the Sobolev norm with different levels of differentiability, which is well adapted to the $F_1$-norm. We compute scaling laws empirically and observe phase transitions depending on whether $g$ or $h$ is harder to learn, as predicted by our theory.
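To make the setup concrete, here is a minimal sketch (not the authors' code) of the kind of experiment the abstract describes: a target $f^{*} = h \circ g$ where $g$ is the quotient map of a rotational symmetry (the radius) and $h$ is a less regular function on the quotient, fitted by a weight-decay-regularized deep network and by a shallow one for comparison. The specific target, widths, and hyperparameters are illustrative assumptions, not values from the paper.

```python
# Sketch: learning a composition f* = h∘g with a weight-decayed deep MLP.
# Target, architecture sizes, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
d = 20          # ambient input dimension (assumption)
n_train = 2000  # "small" number of observations (assumption)

def g(x):
    # Smooth, dimension-reducing map: the radius, i.e. the quotient map of a
    # rotational symmetry of f* (one of the examples mentioned in the abstract).
    return x.norm(dim=-1, keepdim=True)

def h(r):
    # Low-regularity outer function on the 1-D quotient (illustrative choice).
    return torch.abs(torch.sin(3.0 * r))

def f_star(x):
    return h(g(x))

def make_data(n):
    x = torch.randn(n, d)
    return x, f_star(x)

def mlp(widths):
    layers = []
    for w_in, w_out in zip(widths[:-1], widths[1:]):
        layers += [nn.Linear(w_in, w_out), nn.ReLU()]
    return nn.Sequential(*layers[:-1])  # drop the final ReLU

def train(model, x, y, weight_decay=1e-3, steps=3000):
    # Weight decay stands in for the norm-based regularizer in the
    # regularized loss whose global minimizer the paper analyzes.
    opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=weight_decay)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()
    return model

x_tr, y_tr = make_data(n_train)
x_te, y_te = make_data(5000)

deep = train(mlp([d, 256, 256, 256, 1]), x_tr, y_tr)  # depth can learn g, then h
shallow = train(mlp([d, 2048, 1]), x_tr, y_tr)        # single hidden layer

with torch.no_grad():
    for name, model in [("deep", deep), ("shallow", shallow)]:
        err = nn.functional.mse_loss(model(x_te), y_te).item()
        print(f"{name} test MSE: {err:.4f}")
```

Under these assumptions, the deep network can exploit the compositional structure (first collapsing the symmetry via a learned analogue of $g$, then fitting $h$ on the low-dimensional quotient), whereas the shallow network must fit the irregular target directly in the ambient dimension.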

@article{jacot2025_2407.05664,
  title={ How DNNs break the Curse of Dimensionality: Compositionality and Symmetry Learning },
  author={ Arthur Jacot and Seok Hoan Choi and Yuxiao Wen },
  journal={arXiv preprint arXiv:2407.05664},
  year={ 2025 }
}