Sparse Regression Learning by Aggregation and Langevin Monte-Carlo

6 March 2009
A. Dalalyan
Alexandre B. Tsybakov
arXiv:0903.1223
Abstract

We consider the problem of regression learning for deterministic design and independent random errors. We start by proving a sharp PAC-Bayesian type bound for the exponentially weighted aggregate (EWA) under the expected squared empirical loss. For a broad class of noise distributions the presented bound is valid whenever the temperature parameter $\beta$ of the EWA is larger than or equal to $4\sigma^2$, where $\sigma^2$ is the noise variance. A remarkable feature of this result is that it is valid even for unbounded regression functions and the choice of the temperature parameter depends exclusively on the noise level. Next, we apply this general bound to the problem of aggregating the elements of a finite-dimensional linear space spanned by a dictionary of functions $\phi_1,\dots,\phi_M$. We allow $M$ to be much larger than the sample size $n$ but we assume that the true regression function can be well approximated by a sparse linear combination of the functions $\phi_j$. Under this sparsity scenario, we propose an EWA with a heavy-tailed prior and we show that it satisfies a sparsity oracle inequality with leading constant one. Finally, we propose several Langevin Monte-Carlo algorithms to approximately compute such an EWA when the number $M$ of aggregated functions can be large. We discuss in some detail the convergence of these algorithms and present numerical experiments that confirm our theoretical findings.
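To make the aggregation-plus-sampling idea concrete, here is a minimal sketch of an EWA approximated by an unadjusted Langevin algorithm: it targets the pseudo-posterior proportional to exp(-||Y - Phi lam||^2 / beta) times a Student-type heavy-tailed prior, and averages the iterates after burn-in to approximate the posterior-mean prediction. This is not the authors' exact scheme; the function name `ewa_lmc`, the specific prior form, and the step-size/iteration defaults are illustrative assumptions.

```python
import numpy as np

def ewa_lmc(Phi, Y, beta, tau=1.0, step=1e-3, n_iter=10_000, burn_in=1_000, seed=0):
    """Sketch: exponentially weighted aggregate (EWA) approximated by
    unadjusted Langevin Monte-Carlo.

    Phi  : (n, M) dictionary matrix, columns phi_1, ..., phi_M at the design points
    Y    : (n,) response vector
    beta : temperature (the paper's bound requires beta >= 4 * sigma^2)
    tau  : scale of the heavy-tailed prior (Student-type here; an assumption)

    Target density (up to a constant):
        exp(-||Y - Phi @ lam||^2 / beta) * prod_j (1 + lam_j**2 / tau**2)**(-2)
    The EWA prediction is the posterior mean of Phi @ lam, approximated by
    averaging the Langevin iterates after burn-in.
    """
    rng = np.random.default_rng(seed)
    n, M = Phi.shape
    lam = np.zeros(M)
    running_sum = np.zeros(M)

    def grad_log_post(lam):
        # gradient of the log-likelihood term: 2 * Phi^T (Y - Phi lam) / beta
        grad_lik = 2.0 * Phi.T @ (Y - Phi @ lam) / beta
        # gradient of the log of the heavy-tailed (Student-type) prior
        grad_prior = -4.0 * lam / (tau**2 + lam**2)
        return grad_lik + grad_prior

    for k in range(n_iter):
        noise = rng.standard_normal(M)
        # unadjusted Langevin step: drift along the log-density gradient plus Gaussian noise
        lam = lam + step * grad_log_post(lam) + np.sqrt(2.0 * step) * noise
        if k >= burn_in:
            running_sum += lam

    lam_bar = running_sum / (n_iter - burn_in)
    return Phi @ lam_bar  # EWA prediction at the design points
```

In this sketch the heavy tails of the prior play the role of the sparsity-favoring prior in the paper: coefficients attached to irrelevant dictionary functions are shrunk strongly toward zero while a few large coefficients remain essentially unpenalized.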
