arXiv:2008.11840
Out-of-sample error estimate for robust M-estimators with convex penalty

26 August 2020
Pierre C. Bellec
Abstract

A generic out-of-sample error estimate is proposed for robust $M$-estimators regularized with a convex penalty in high-dimensional linear regression where $(X,y)$ is observed and $p,n$ are of the same order. If $\psi$ is the derivative of the robust data-fitting loss $\rho$, the estimate depends on the observed data only through the quantities $\hat\psi = \psi(y-X\hat\beta)$, $X^\top \hat\psi$ and the derivatives $(\partial/\partial y)\,\hat\psi$ and $(\partial/\partial y)\, X\hat\beta$ for fixed $X$. The out-of-sample error estimate enjoys a relative error of order $n^{-1/2}$ in a linear model with Gaussian covariates and independent noise, either non-asymptotically when $p/n \le \gamma$ or asymptotically in the high-dimensional asymptotic regime $p/n \to \gamma' \in (0,\infty)$. General differentiable loss functions $\rho$ are allowed provided that $\psi = \rho'$ is 1-Lipschitz. The validity of the out-of-sample error estimate holds either under a strong convexity assumption, or for the $\ell_1$-penalized Huber M-estimator if the number of corrupted observations and the sparsity of the true $\beta$ are bounded from above by $s_* n$ for some small enough constant $s_* \in (0,1)$ independent of $n,p$. For the square loss and in the absence of corruption in the response, the results additionally yield $n^{-1/2}$-consistent estimates of the noise variance and of the generalization error. This generalizes, to arbitrary convex penalty, estimates that were previously known for the Lasso.
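As a rough illustration of the quantities the abstract says the estimate depends on, the sketch below computes $\hat\psi = \psi(y - X\hat\beta)$ and $X^\top \hat\psi$ for the Huber loss, whose derivative $\psi$ is 1-Lipschitz as required. This is not the paper's estimator: the fitted $\hat\beta$ here is a plain least-squares placeholder (the paper studies convex-penalized M-estimators), and the derivative terms $(\partial/\partial y)\,\hat\psi$ and $(\partial/\partial y)\, X\hat\beta$ are omitted.

```python
import numpy as np

def huber_psi(r, delta=1.0):
    # Derivative psi = rho' of the Huber loss: identity on [-delta, delta],
    # clipped outside; this makes psi 1-Lipschitz, as the paper requires.
    return np.clip(r, -delta, delta)

rng = np.random.default_rng(0)
n, p = 100, 50                        # p and n of the same order
X = rng.standard_normal((n, p))       # Gaussian covariates
beta_true = np.zeros(p)
beta_true[:5] = 1.0                   # sparse true beta
y = X @ beta_true + rng.standard_normal(n)

# Placeholder fit: ordinary least squares stands in for the
# convex-penalized M-estimator beta_hat studied in the paper.
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]

psi_hat = huber_psi(y - X @ beta_hat)  # hat{psi} = psi(y - X beta_hat)
grad = X.T @ psi_hat                   # X^T hat{psi}
```

Both `psi_hat` (an $n$-vector) and `grad` (a $p$-vector) are functions of the observed data alone, which is the point of the abstract: the error estimate requires no knowledge of the noise distribution or of the true $\beta$.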
