
arXiv:1407.4729

Sparse Partially Linear Additive Models

17 July 2014
Yin Lou
Jacob Bien
R. Caruana
J. Gehrke
Abstract

The generalized partially linear additive model (GPLAM) is a flexible and interpretable approach to building predictive models. It combines features in an additive manner, allowing them to have either a linear or nonlinear effect on the response. However, the assignment of features to the linear and nonlinear groups is typically assumed known. Thus, to make a GPLAM a viable approach in situations in which little is known a priori about the features, one must overcome two primary model selection challenges: deciding which features to include in the model and determining which features to treat nonlinearly. We introduce sparse partially linear additive models (SPLAMs), which combine model fitting and both of these model selection challenges into a single convex optimization problem. SPLAM provides a bridge between the Lasso and sparse additive models. Through a statistical oracle inequality and thorough simulation, we demonstrate that SPLAM can outperform other methods across a broad spectrum of statistical regimes, including the high-dimensional (p ≫ N) setting. We develop efficient algorithms that are applied to real data sets with half a million samples and over 45,000 features with excellent predictive performance.
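
The abstract describes SPLAM as deciding, within a single convex problem, which features enter the model at all and which of those enter nonlinearly. The sketch below is a minimal illustration of that general idea, not the authors' implementation: each feature is expanded into one linear column plus a nonlinear basis (a polynomial basis here, purely as an assumption; the paper's basis, penalty weighting, and solver differ), and a two-level nested group-lasso penalty, solved by proximal gradient descent, lets each feature be dropped entirely, kept linear, or kept nonlinear. The names build_design, fit_splam_sketch, lam_feat, and lam_nonlin are illustrative assumptions.

```python
# Minimal, illustrative sketch of the SPLAM idea (assumptions noted above),
# not the authors' algorithm.
import numpy as np

def build_design(X, degree=4):
    """Per feature: one linear column followed by centered polynomial columns."""
    n, p = X.shape
    blocks, groups, col = [], [], 0
    for j in range(p):
        xj = X[:, j]
        nonlin = np.column_stack([xj ** d for d in range(2, degree + 1)])
        nonlin -= nonlin.mean(axis=0)                   # center nonlinear columns
        block = np.hstack([xj[:, None], nonlin])
        blocks.append(block)
        groups.append(np.arange(col, col + block.shape[1]))
        col += block.shape[1]
    return np.hstack(blocks), groups

def group_soft_threshold(b, t):
    """Block soft-thresholding: proximal operator of t * ||.||_2."""
    norm = np.linalg.norm(b)
    return np.zeros_like(b) if norm <= t else (1.0 - t / norm) * b

def fit_splam_sketch(Z, y, groups, lam_feat=0.05, lam_nonlin=0.05, n_iter=1000):
    """Proximal gradient for 0.5/n * ||y - Z b||^2 plus, for every feature block,
    lam_feat * ||b_block||_2 + lam_nonlin * ||b_nonlinear||_2 (nested groups)."""
    n, d = Z.shape
    beta = np.zeros(d)
    step = n / np.linalg.norm(Z, 2) ** 2                # 1 / Lipschitz constant
    for _ in range(n_iter):
        beta -= step * (Z.T @ (Z @ beta - y) / n)       # gradient step on the loss
        for g in groups:
            # nested proximal steps: nonlinear sub-block first, then whole block
            beta[g[1:]] = group_soft_threshold(beta[g[1:]], step * lam_nonlin)
            beta[g] = group_soft_threshold(beta[g], step * lam_feat)
    return beta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, p = 500, 20
    X = rng.uniform(-1.0, 1.0, size=(n, p))
    # feature 0 acts linearly, feature 1 nonlinearly, the remaining 18 are noise
    y = 2.0 * X[:, 0] + np.sin(3.0 * X[:, 1]) + 0.1 * rng.standard_normal(n)
    Z, groups = build_design(X)
    beta = fit_splam_sketch(Z, y, groups)
    for j, g in enumerate(groups[:3]):
        lin, nonlin = abs(beta[g[0]]), np.linalg.norm(beta[g[1:]])
        print(f"feature {j}: |linear| = {lin:.3f}, ||nonlinear|| = {nonlin:.3f}")
```

On this toy data the nested penalty should leave feature 0 with a sizable linear coefficient and a near-zero nonlinear block, keep both parts of feature 1, and shrink the noise features toward zero, mirroring the zero/linear/nonlinear selection the abstract describes.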
