Finite-sample and asymptotic analysis of generalization ability with an application to penalized regression

Abstract

In this paper, we study the performance of extremum estimators from the perspective of generalization ability (GA): the ability of a model to predict outcomes in new samples from the same population. By adapting classical concentration inequalities, we derive upper bounds on the empirical out-of-sample prediction errors as a function of the in-sample errors, the in-sample data size, the heaviness of the tails of the error distribution, and model complexity. We show that the error bounds may be used for tuning key estimation hyper-parameters, such as the number of folds $K$ in cross-validation. We also show how $K$ affects the bias-variance trade-off for cross-validation. We demonstrate that the $\mathcal{L}_2$-norm difference between penalized and the corresponding un-penalized regression estimates is directly explained by the GA of the estimates and the GA of the empirical moment conditions. Lastly, we prove that all penalized regression estimates are $L_2$-consistent in both the $n \geqslant p$ and the $n < p$ cases. Simulations are used to demonstrate key results.

Keywords: generalization ability, upper bound of generalization error, penalized regression, cross-validation, bias-variance trade-off, $\mathcal{L}_2$ difference between penalized and unpenalized regression, lasso, high-dimensional data.
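The abstract notes that the number of folds $K$ shapes the bias-variance trade-off of cross-validation. The following is a minimal, generic sketch of that idea for a lasso fit in the $n < p$ setting; it is not the paper's bound-based tuning rule, and the data-generating choices (five nonzero coefficients, penalty `alpha=0.1`, the `cv_error` helper) are illustrative assumptions only.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n, p = 100, 200                       # n < p: high-dimensional setting
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = 1.0                        # sparse true coefficients (illustrative assumption)
y = X @ beta + rng.standard_normal(n)

def cv_error(K, alpha):
    """Mean and spread of out-of-fold squared errors for K folds."""
    fold_errors = []
    for train_idx, test_idx in KFold(n_splits=K, shuffle=True, random_state=0).split(X):
        model = Lasso(alpha=alpha).fit(X[train_idx], y[train_idx])
        resid = y[test_idx] - model.predict(X[test_idx])
        fold_errors.append(np.mean(resid ** 2))
    # The mean tracks the bias of the CV error estimate; the spread across
    # folds gives a rough sense of its variance as K changes.
    return np.mean(fold_errors), np.std(fold_errors)

for K in (2, 5, 10):
    mean_err, spread = cv_error(K, alpha=0.1)
    print(f"K={K:2d}: out-of-fold MSE {mean_err:.3f} (spread across folds {spread:.3f})")
```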
