ResearchTrend.AI

arXiv: 1901.08572 (v3, latest)

Width Provably Matters in Optimization for Deep Linear Neural Networks

24 January 2019
Simon S. Du
Wei Hu
Abstract

We prove that for an $L$-layer fully-connected linear neural network, if the width of every hidden layer is $\tilde\Omega(L \cdot r \cdot d_{\mathrm{out}} \cdot \kappa^3)$, where $r$ and $\kappa$ are the rank and the condition number of the input data, and $d_{\mathrm{out}}$ is the output dimension, then gradient descent with Gaussian random initialization converges to a global minimum at a linear rate. The number of iterations to find an $\epsilon$-suboptimal solution is $O(\kappa \log(\frac{1}{\epsilon}))$. Our polynomial upper bound on the total running time for wide deep linear networks and the $\exp(\Omega(L))$ lower bound for narrow deep linear neural networks [Shamir, 2018] together demonstrate that wide layers are necessary for optimizing deep models.
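The setting the abstract describes — an $L$-layer deep *linear* network trained by plain gradient descent from Gaussian random initialization on a squared loss — can be sketched numerically. This is only an illustration of the setup, not the paper's analysis: the dimensions, step size, and initialization scale below are assumed values, far from the theorem's $\tilde\Omega(L \cdot r \cdot d_{\mathrm{out}} \cdot \kappa^3)$ width regime.

```python
import numpy as np

# Sketch of the setting: an L-layer deep *linear* network
# f(X) = W_L ... W_1 X, trained by gradient descent on the squared loss,
# with Gaussian random initialization. All sizes and the step size are
# illustrative assumptions, not the constants from the theorem.
rng = np.random.default_rng(0)

d_in, d_out, m, L, n = 4, 3, 32, 3, 20    # m = hidden width (the quantity the bound controls)
X = rng.standard_normal((d_in, n))        # input data (here full rank, r = d_in)
Y = rng.standard_normal((d_out, n))       # regression targets

# Gaussian init, scaled by 1/sqrt(fan_in) so the layer product stays well-scaled.
dims = [d_in] + [m] * (L - 1) + [d_out]
Ws = [rng.standard_normal((dims[i + 1], dims[i])) / np.sqrt(dims[i])
      for i in range(L)]

def loss(Ws):
    P = X
    for W in Ws:
        P = W @ P                         # apply layers: P = W_L ... W_1 X
    return 0.5 * np.linalg.norm(P - Y) ** 2

loss0 = loss(Ws)
eta = 1e-3                                # small constant step size (illustrative)
for step in range(2000):
    # Forward pass, caching every layer's output for the backward pass.
    acts = [X]
    for W in Ws:
        acts.append(W @ acts[-1])
    G = acts[-1] - Y                      # gradient of the loss w.r.t. the network output
    for i in reversed(range(L)):
        grad = G @ acts[i].T              # gradient w.r.t. W_{i+1}
        G = Ws[i].T @ G                   # back-propagate to the layer below
        Ws[i] = Ws[i] - eta * grad

print(f"loss: {loss0:.2f} -> {loss(Ws):.4f}")
```

With the toy widths above the loss simply decreases toward the least-squares optimum; the paper's result is the stronger statement that once every hidden width reaches the stated polynomial bound, this same procedure provably converges to a global minimum at a linear rate.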
