arXiv:2409.06555

Deep Neural Networks: Multi-Classification and Universal Approximation

10 September 2024
Martín Hernández
Enrique Zuazua
Abstract

We demonstrate that a ReLU deep neural network with a width of $2$ and a depth of $2N+4M-1$ layers can achieve finite sample memorization for any dataset comprising $N$ elements in $\mathbb{R}^d$, where $d\ge 1$, and $M$ classes, thereby ensuring accurate classification. By modeling the neural network as a time-discrete nonlinear dynamical system, we interpret the memorization property as a problem of simultaneous or ensemble controllability. This problem is addressed by constructing the network parameters inductively and explicitly, bypassing the need for training or solving any optimization problem. Additionally, we establish that such a network can achieve universal approximation in $L^p(\Omega;\mathbb{R}_+)$, where $\Omega$ is a bounded subset of $\mathbb{R}^d$ and $p\in[1,\infty)$, using a ReLU deep neural network with a width of $d+1$. We also provide depth estimates for approximating $W^{1,p}$ functions and width estimates for approximating $L^p(\Omega;\mathbb{R}^m)$ for $m\ge 1$. Our proofs are constructive, offering explicit values for the biases and weights involved.
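
For readers who want to experiment with the dynamical-systems viewpoint the abstract describes, the following Python sketch treats a deep ReLU network as the time-discrete system $x_{k+1} = \mathrm{ReLU}(W_k x_k + b_k)$, with the layer index $k$ playing the role of time and the parameters $(W_k, b_k)$ acting as controls. This is only an illustration of the viewpoint: the input dimension, the lift from $\mathbb{R}^d$ to the width-2 state, and all weight values below are placeholders, not the explicit parameters constructed in the paper.

```python
# Minimal sketch (NOT the authors' construction): a deep ReLU network read
# as a time-discrete dynamical system x_{k+1} = relu(W_k x_k + b_k).
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def forward(x0, params):
    """Iterate the layer map; `params` is a list of controls (W_k, b_k)."""
    x = x0
    for W, b in params:
        x = relu(W @ x + b)
    return x

d = 3  # input dimension; illustrative choice
rng = np.random.default_rng(0)
# Lift from R^d into the width-2 hidden state (placeholder values).
lift = (rng.standard_normal((2, d)), np.zeros(2))
# A few width-2 hidden layers with placeholder controls (W_k, b_k).
hidden = [(rng.standard_normal((2, 2)), rng.standard_normal(2))
          for _ in range(5)]
x_final = forward(rng.standard_normal(d), [lift] + hidden)
print(x_final)  # terminal state of the N-trajectory ensemble for one input
```

In the paper, the controls are instead chosen inductively and explicitly so that, after $2N+4M-1$ steps, all $N$ trajectories simultaneously reach states encoding their class labels (the ensemble-controllability property). As a worked instance of the depth bound, a dataset with $N=100$ points and $M=10$ classes fits within depth $2\cdot 100 + 4\cdot 10 - 1 = 239$.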
