An analysis of penalized interaction models

30 March 2016

Junlong Zhao
Chenlei Leng

arXiv:1603.09138
Abstract

An important consideration for variable selection in interaction models is to design an appropriate penalty that respects the hierarchy of importance among the variables. A common theme is to include an interaction term only after the corresponding main effects are present. In this paper, we study several recently proposed approaches and present a unified analysis of the convergence rate for a class of estimators when the design satisfies the restricted eigenvalue condition. In particular, we show that with probability tending to one, the resulting estimates have a rate of convergence of $s\sqrt{\log p_1 / n}$ in the $\ell_1$ error, where $p_1$ is the ambient dimension, $s$ is the true dimension and $n$ is the sample size. We give a new proof that the restricted eigenvalue condition holds with high probability when the variables in the main effects and the errors follow sub-Gaussian distributions. Under this setup, the interactions no longer follow Gaussian or sub-Gaussian distributions even if the main effects are Gaussian, and thus existing results are not directly applicable. This result is of independent interest.
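
To make the setup concrete, the sketch below writes out a generic two-way interaction model and the hierarchy constraints that the abstract alludes to. The symbols $\beta_j$, $\gamma_{jk}$ and the strong/weak hierarchy terminology are standard in this literature but are used here only for illustration; they are not necessarily the paper's exact notation or assumptions.

```latex
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}

% A generic two-way interaction model with p_1 main effects x_1,...,x_{p_1},
% pairwise interactions x_j x_k, and noise term \varepsilon:
\[
  y \;=\; \sum_{j=1}^{p_1} \beta_j x_j
      \;+\; \sum_{1 \le j < k \le p_1} \gamma_{jk}\, x_j x_k
      \;+\; \varepsilon .
\]

% Strong hierarchy: an interaction may be selected only if BOTH of its
% parent main effects are selected,
\[
  \gamma_{jk} \neq 0 \;\Longrightarrow\; \beta_j \neq 0 \ \text{and}\ \beta_k \neq 0 .
\]
% Weak hierarchy relaxes this to requiring at least one parent,
% i.e. $\beta_j \neq 0$ or $\beta_k \neq 0$.

\end{document}
```

The hierarchy-respecting penalties discussed in the abstract are designed so that an estimated $\gamma_{jk}$ can be nonzero only when the corresponding parent main effects are retained, which is the "interaction only after main effects" principle stated above.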
