Feature Learning in $L_2$-regularized DNNs: Attraction/Repulsion and Sparsity

31 May 2022
Arthur Jacot
Eugene Golikov
Clément Hongler
Franck Gabriel
    MLT
Abstract

We study the loss surface of DNNs with $L_2$ regularization. We show that the loss in terms of the parameters can be reformulated into a loss in terms of the layerwise activations $Z_\ell$ of the training set. This reformulation reveals the dynamics behind feature learning: each hidden representation $Z_\ell$ is optimal w.r.t. an attraction/repulsion problem and interpolates between the input and output representations, keeping as little information from the input as necessary to construct the activations of the next layer. For positively homogeneous non-linearities, the loss can be further reformulated in terms of the covariances of the hidden representations, which takes the form of a partially convex optimization over a convex cone. This second reformulation allows us to prove a sparsity result for homogeneous DNNs: any local minimum of the $L_2$-regularized loss can be achieved with at most $N(N+1)$ neurons in each hidden layer (where $N$ is the size of the training set). We show that this bound is tight by giving an example of a local minimum that requires $N^2/4$ hidden neurons. But we also observe numerically that in more traditional settings far fewer than $N^2$ neurons are required to reach the minima.
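For context, the $L_2$-regularized objective the abstract refers to can be sketched as follows (a standard formulation; the notation here is assumed for illustration and is not quoted from the paper's full text):

```latex
% L_2-regularized training loss of a network f_\theta with parameters \theta
% over N training pairs (x_i, y_i), with regularization strength \lambda:
\[
  \mathcal{L}(\theta)
  \;=\; \sum_{i=1}^{N} \ell\bigl(f_\theta(x_i),\, y_i\bigr)
  \;+\; \lambda \,\lVert \theta \rVert_2^2 .
\]
% The reformulation described in the abstract treats the layerwise
% activations of the training set, Z_\ell = (z_\ell(x_1), \dots, z_\ell(x_N)),
% as the optimization variables in place of \theta, so the objective becomes
% a function of the hidden representations (Z_1, \dots, Z_{L-1}) directly.
```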
