The Efficacy of $L_1$ Regularization in Two-Layer Neural Networks

2 October 2020
Gen Li, Yuantao Gu, Jie Ding
arXiv: 2010.01048
Abstract

A crucial problem in neural networks is to select the most appropriate number of hidden neurons and obtain tight statistical risk bounds. In this work, we present a new perspective on the bias-variance tradeoff in neural networks. As an alternative to selecting the number of neurons, we theoretically show that $L_1$ regularization can control the generalization error and sparsify the input dimension. In particular, with an appropriate $L_1$ regularization on the output layer, the network can produce a statistical risk that is near minimax optimal. Moreover, an appropriate $L_1$ regularization on the input layer leads to a risk bound that does not involve the input data dimension. Our analysis is based on a new amalgamation of dimension-based and norm-based complexity analyses to bound the generalization error. A consequent observation from our results is that an excessively large number of neurons does not necessarily inflate the generalization error under suitable regularization.
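To make the setup concrete, the following is a minimal sketch (not the authors' construction) of a two-layer ReLU network trained with $L_1$ penalties on both the output-layer weights (controlling generalization error) and the input-layer weights (sparsifying the input dimension), assuming a PyTorch implementation. The penalty weights lambda_out and lambda_in, the dimensions, and the training loop are illustrative choices, not values from the paper.

import torch
import torch.nn as nn

class TwoLayerNet(nn.Module):
    def __init__(self, in_dim: int, hidden: int):
        super().__init__()
        self.hidden = nn.Linear(in_dim, hidden)   # input layer
        self.output = nn.Linear(hidden, 1)        # output layer
        self.act = nn.ReLU()

    def forward(self, x):
        return self.output(self.act(self.hidden(x)))

def l1_regularized_loss(model, x, y, lambda_out=1e-3, lambda_in=1e-3):
    # Squared loss plus L1 penalties on the two weight matrices;
    # lambda_out / lambda_in are illustrative hyperparameters.
    mse = nn.functional.mse_loss(model(x), y)
    l1_output = model.output.weight.abs().sum()   # L1 on output-layer weights
    l1_input = model.hidden.weight.abs().sum()    # L1 on input-layer weights
    return mse + lambda_out * l1_output + lambda_in * l1_input

# Illustrative usage with an intentionally over-wide hidden layer: per the
# abstract's message, the L1 penalties, rather than the neuron count, are
# what control the statistical risk.
model = TwoLayerNet(in_dim=50, hidden=1024)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(256, 50), torch.randn(256, 1)
for _ in range(100):
    opt.zero_grad()
    loss = l1_regularized_loss(model, x, y)
    loss.backward()
    opt.step()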
