Overparametrized linear dimensionality reductions: From projection pursuit to two-layer neural networks

14 June 2022
Andrea Montanari
Kangjie Zhou
Abstract

Given a cloud of $n$ data points in $\mathbb{R}^d$, consider all projections onto $m$-dimensional subspaces of $\mathbb{R}^d$ and, for each such projection, the empirical distribution of the projected points. What does this collection of probability distributions look like when $n, d$ grow large?

We consider this question under the null model in which the points are i.i.d. standard Gaussian vectors, focusing on the asymptotic regime in which $n, d \to \infty$ with $n/d \to \alpha \in (0, \infty)$, while $m$ is fixed. Denoting by $\mathscr{F}_{m,\alpha}$ the set of probability distributions in $\mathbb{R}^m$ that arise as low-dimensional projections in this limit, we establish new inner and outer bounds on $\mathscr{F}_{m,\alpha}$. In particular, we characterize the Wasserstein radius of $\mathscr{F}_{m,\alpha}$ up to constant multiplicative factors, and determine it exactly for $m = 1$. We also prove sharp bounds in terms of Kullback-Leibler divergence and Rényi information dimension.

The previous question has applications to unsupervised learning methods, such as projection pursuit and independent component analysis. We introduce a version of the same problem that is relevant for supervised learning, and prove a sharp Wasserstein radius bound. As an application, we establish an upper bound on the interpolation threshold of two-layer neural networks with $m$ hidden neurons.
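The null model described in the abstract is straightforward to simulate. The following is a minimal sketch, assuming NumPy; the dimensions n = 2000, d = 1000, m = 1 are illustrative choices, not values from the paper. It samples the Gaussian cloud, projects it onto a random $m$-dimensional subspace, and inspects the empirical distribution of the projected points. For a typical (random) choice of subspace, the projected points are close to standard Gaussian in $\mathbb{R}^m$; projection pursuit instead searches over subspaces for atypical, non-Gaussian projections, and it is the reachable set of such projected distributions that $\mathscr{F}_{m,\alpha}$ captures.

import numpy as np

rng = np.random.default_rng(0)

n, d, m = 2000, 1000, 1          # aspect ratio alpha = n/d = 2, m fixed (illustrative)
X = rng.standard_normal((n, d))  # cloud of n i.i.d. N(0, I_d) points

# Draw an orthonormal basis W (d x m) of a random m-dimensional subspace of R^d.
W, _ = np.linalg.qr(rng.standard_normal((d, m)))

# Empirical distribution of the projected points in R^m.
Z = X @ W                        # shape (n, m)

# For a random projection, Z is approximately N(0, I_m); projection pursuit
# would optimize over W to find projections far from this Gaussian baseline.
print(Z.mean(axis=0), Z.std(axis=0))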

@article{montanari2022_2206.06526,
  title={Overparametrized linear dimensionality reductions: From projection pursuit to two-layer neural networks},
  author={Andrea Montanari and Kangjie Zhou},
  journal={arXiv preprint arXiv:2206.06526},
  year={2022}
}