Global Convergence of Deep Networks with One Wide Layer Followed by Pyramidal Topology

18 February 2020
Quynh N. Nguyen
Marco Mondelli
Communities: ODL, AI4CE
Abstract

Recent works have shown that gradient descent can find a global minimum for over-parameterized neural networks where the widths of all the hidden layers scale polynomially with N (N being the number of training samples). In this paper, we prove that, for deep networks, a single layer of width N following the input layer suffices to ensure a similar guarantee. In particular, all the remaining layers are allowed to have constant widths, and form a pyramidal topology. We show an application of our result to the widely used LeCun's initialization and obtain an over-parameterization requirement for the single wide layer of order N^2.
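
To make the architecture concrete, here is a minimal sketch (not the authors' code) of such a network in PyTorch: one hidden layer of width N directly after the input, followed by constant-width layers forming a pyramidal (non-increasing) topology, trained by full-batch gradient descent. The ReLU activation, the depth of 4 narrow layers, the narrow width of 32, and PyTorch's default initialization are illustrative assumptions; the paper's N^2 over-parameterization requirement concerns LeCun's initialization specifically.

```python
# Sketch of the architecture studied in the abstract: one wide hidden layer
# of width N (N = number of training samples) after the input, followed by
# constant-width layers that form a pyramidal (non-increasing) topology.
import torch
import torch.nn as nn


def pyramidal_net(input_dim, n_samples, narrow_width=32, depth=4, output_dim=1):
    """Build a network with one wide layer of width n_samples, then
    `depth` layers of constant width (illustrative choices, not from the paper)."""
    widths = [n_samples] + [narrow_width] * depth  # non-increasing after the wide layer
    layers = []
    prev = input_dim
    for w in widths:
        layers += [nn.Linear(prev, w), nn.ReLU()]
        prev = w
    layers.append(nn.Linear(prev, output_dim))
    return nn.Sequential(*layers)


# Example: N = 100 training samples of dimension 10, trained with plain
# full-batch gradient descent, the setting analyzed in the paper.
N, d = 100, 10
X, y = torch.randn(N, d), torch.randn(N, 1)
model = pyramidal_net(d, N)
opt = torch.optim.SGD(model.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss = ((model(X) - y) ** 2).mean()
    loss.backward()
    opt.step()
```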

View on arXiv: 2002.07867