The BerHu penalty and the grouped effect

30 July 2012
Laurent Zwald
S. Lambert-Lacroix
Abstract

Huber's criterion is a useful method for robust regression. The adaptive least absolute shrinkage and selection operator (lasso) is a popular technique for simultaneous estimation and variable selection. However, when the sample size is small and the number of covariates is large, this penalty is not a very satisfactory variable selection method. In this paper, we introduce an adaptive reversed version of Huber's criterion as a penalty function, which we call the adaptive BerHu penalty. As with the elastic net penalty, small coefficients contribute their ℓ1 norm to this penalty, while larger coefficients cause it to grow quadratically (as in ridge regression). We show that the estimator obtained by combining a fitting criterion such as ordinary least squares or Huber's criterion with the adaptive BerHu penalty enjoys the oracle properties. In addition, this procedure encourages a grouping effect. The approach is compared with adaptive elastic net regularization. Extensive simulation studies demonstrate satisfactory finite-sample performance of the procedure, and a real example is analyzed for illustration purposes.

Keywords: adaptive BerHu penalty; concomitant scale; elastic net penalty; Huber's criterion; oracle property; robust estimation.
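To make the piecewise behaviour described in the abstract concrete, here is a minimal sketch of the standard (non-adaptive) reversed Huber function with a single threshold c, penalizing small coefficients by their absolute value and large ones quadratically. The threshold value, the weight lambda, and the helper names are illustrative assumptions; the adaptive penalty studied in the paper attaches coefficient-specific, data-driven weights (and the criterion may include a concomitant scale), which are not shown here.

    import numpy as np

    def berhu(t, c=1.0):
        # Reversed Huber (BerHu) function: |t| for |t| <= c,
        # (t^2 + c^2) / (2c) for |t| > c, continuous at |t| = c.
        t = np.asarray(t, dtype=float)
        a = np.abs(t)
        return np.where(a <= c, a, (a**2 + c**2) / (2.0 * c))

    def penalized_loss(beta, X, y, lam=0.1, c=1.0):
        # Sketch of a penalized least-squares objective using the
        # non-adaptive BerHu penalty (no adaptive weights).
        residual = y - X @ beta
        return 0.5 * np.sum(residual**2) + lam * np.sum(berhu(beta, c))

For |t| <= c the function equals |t| (lasso-like shrinkage toward zero); beyond c it grows like a ridge penalty, which is what drives the grouping effect among strongly correlated covariates with large coefficients.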
