Geometric structure of Deep Learning networks and construction of global $\mathcal{L}^2$ minimizers

19 September 2023
Thomas Chen
Patrícia Muñoz Ewald
arXiv:2309.10639
Abstract

In this paper, we explicitly determine local and global minimizers of the $\mathcal{L}^2$ cost function in underparametrized Deep Learning (DL) networks; our main goal is to shed light on their geometric structure and properties. We accomplish this by a direct construction, without invoking the gradient descent flow at any point of this work. We specifically consider $L$ hidden layers, a ReLU ramp activation function, an $\mathcal{L}^2$ Schatten class (or Hilbert-Schmidt) cost function, input and output spaces $\mathbb{R}^Q$ with equal dimension $Q\geq 1$, and hidden layers also defined on $\mathbb{R}^Q$; the training inputs are assumed to be sufficiently clustered. The training input size $N$ can be arbitrarily large; thus, we are considering the underparametrized regime. More general settings are left to future work. We construct an explicit family of minimizers for the global minimum of the cost function in the case $L\geq Q$, which we show to be degenerate. Moreover, we determine a set of $2^Q-1$ distinct degenerate local minima of the cost function. In the context presented here, the concatenation of hidden layers of the DL network is reinterpreted as a recursive application of a truncation map which "curates" the training inputs by minimizing their noise-to-signal ratio.

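As an illustration of the setup described in the abstract, the following Python sketch assembles a network with $L$ hidden layers of width $Q$, a ReLU ramp activation, and a Hilbert-Schmidt ($\mathcal{L}^2$) type cost evaluated on clustered training inputs. The function names, the toy clustered data, and the exact normalization of the cost are illustrative assumptions made here; they are not taken from the paper itself.

import numpy as np

def relu(x):
    # ReLU ramp activation, applied componentwise.
    return np.maximum(x, 0.0)

def forward(x, weights, biases):
    # Forward pass: L hidden layers of width Q with ReLU, then an affine output layer.
    # weights: list of L+1 matrices of shape (Q, Q); biases: list of L+1 vectors in R^Q.
    for W, b in zip(weights[:-1], biases[:-1]):
        x = relu(W @ x + b)
    return weights[-1] @ x + biases[-1]

def l2_cost(weights, biases, X, Y):
    # Hilbert-Schmidt-type L^2 cost over the N training pairs (columns of X and Y);
    # the precise normalization used in the paper may differ from this root-mean-square form.
    N = X.shape[1]
    err = np.stack([forward(X[:, j], weights, biases) - Y[:, j] for j in range(N)], axis=1)
    return np.sqrt(np.sum(err ** 2) / N)

# Toy data: Q clusters of inputs in R^Q, each cluster assigned one fixed target output,
# mimicking the "sufficiently clustered" training-input assumption of the abstract.
rng = np.random.default_rng(0)
Q, L, n = 3, 4, 200                        # L >= Q, with N = Q * n training inputs
centers = rng.standard_normal((Q, Q))      # one cluster center per column
targets = rng.standard_normal((Q, Q))      # one reference output per cluster
X = np.hstack([centers[:, [k]] + 0.05 * rng.standard_normal((Q, n)) for k in range(Q)])
Y = np.hstack([np.repeat(targets[:, [k]], n, axis=1) for k in range(Q)])

weights = [rng.standard_normal((Q, Q)) for _ in range(L + 1)]
biases = [rng.standard_normal(Q) for _ in range(L + 1)]
print("L^2 cost at a random parameter point:", l2_cost(weights, biases, X, Y))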