Wide stable neural networks: Sample regularity, functional convergence and Bayesian inverse problems

4 July 2024
Tomás Soto
Abstract

We study the large-width asymptotics of random fully connected neural networks with weights drawn from $\alpha$-stable distributions, a family of heavy-tailed distributions arising as the limiting distributions in the Gnedenko-Kolmogorov heavy-tailed central limit theorem. We show that in an arbitrary bounded Euclidean domain $\mathcal{U}$ with smooth boundary, the random field at the infinite-width limit, characterized in previous literature in terms of finite-dimensional distributions, has sample functions in the fractional Sobolev-Slobodeckij-type quasi-Banach function space $W^{s,p}(\mathcal{U})$ for integrability indices $p < \alpha$ and suitable smoothness indices $s$ depending on the activation function of the neural network, and we establish the functional convergence of the processes in $\mathcal{P}(W^{s,p}(\mathcal{U}))$. This convergence result is leveraged in the study of functional posteriors for edge-preserving Bayesian inverse problems with stable neural network priors.
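To make the setup concrete, the following is a minimal sketch, not the paper's code, of the kind of random network the abstract describes: a one-hidden-layer fully connected network whose weights are i.i.d. symmetric $\alpha$-stable. The $n^{-1/\alpha}$ scaling of the output layer is the standard normalization under which stable infinite-width limits arise; the specific scaling, activation (tanh here), the helper name `stable_network`, and the choice $\alpha = 1.5$ are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from scipy.stats import levy_stable

def stable_network(x, width, alpha=1.5, rng=None):
    """Evaluate a one-hidden-layer network with i.i.d. symmetric
    alpha-stable weights at input points x of shape (m, d).

    Assumptions (for illustration only): tanh activation, no biases,
    and output scaling width**(-1/alpha), the standard normalization
    for stable infinite-width limits.
    """
    rng = np.random.default_rng(rng)
    m, d = x.shape
    # Symmetric alpha-stable corresponds to skewness beta = 0.
    w1 = levy_stable.rvs(alpha, 0, size=(d, width), random_state=rng)
    w2 = levy_stable.rvs(alpha, 0, size=(width,), random_state=rng)
    hidden = np.tanh(x @ w1)
    return (hidden @ w2) * width ** (-1.0 / alpha)

# Sample the random field on a grid in a bounded domain, e.g. U = (0, 1);
# as width grows, the finite-dimensional distributions approach those of
# the stable limiting field discussed in the abstract.
xs = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
sample = stable_network(xs, width=5000, alpha=1.5, rng=0)
```

Because stable weights are heavy-tailed, individual hidden units can dominate the sum, which is what makes such priors attractive for edge-preserving Bayesian inversion: sampled functions retain sharp, jump-like features rather than the uniform smoothness of Gaussian limits.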
