Generalization Bounds for High-dimensional M-estimation under Sparsity Constraint

The ℓ0-constrained empirical risk minimization (ℓ0-ERM) is a promising tool for high-dimensional statistical estimation. Existing analysis of the ℓ0-ERM estimator focuses mostly on parameter estimation and support recovery consistency. From the perspective of statistical learning, another fundamental question is how well the ℓ0-ERM estimator performs on unseen samples. The answer is important for understanding the learnability of such a non-convex (and NP-hard) M-estimator, yet it remains relatively underexplored. In this paper, we investigate this problem and develop a generalization theory for ℓ0-ERM. In both white-box and black-box statistical regimes, we establish a set of generalization gap and excess risk bounds for ℓ0-ERM that characterize its sparse prediction and optimization capability. Our theory reveals three main findings: 1) tighter generalization bounds can be attained by ℓ0-ERM than by ℓ1-ERM if the risk function is (with high probability) restricted strongly convex; 2) tighter uniform generalization bounds can be established for ℓ0-ERM than for conventional dense ERM; and 3) sparsity-level-invariant bounds can be established by imposing additional strong-signal conditions that ensure the stability of ℓ0-ERM. In light of these results, we further provide generalization guarantees for the Iterative Hard Thresholding (IHT) algorithm, one of the most popular greedy pursuit methods for approximately solving ℓ0-ERM. Numerical evidence confirms our theoretical predictions when applied to sparsity-constrained linear regression and logistic regression models.
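For concreteness, ℓ0-ERM refers to minimizing the empirical risk subject to a hard sparsity constraint on the parameter vector. A standard formulation (the notation below is a generic sketch, not copied from the paper's body) is:

$$\hat{w} \in \operatorname*{arg\,min}_{\|w\|_0 \le k} \; \frac{1}{n} \sum_{i=1}^{n} \ell(w; z_i),$$

where $\|w\|_0$ counts the nonzero entries of $w$, $k$ is the target sparsity level, $\ell$ is a loss function, and $z_1, \dots, z_n$ are the training samples. The constraint set is non-convex, which is the source of both the NP-hardness noted above and the need for approximate solvers such as IHT.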
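The following is a minimal sketch of the standard IHT update for sparsity-constrained least squares: a full gradient step followed by hard thresholding onto the top-k support. The step-size rule, iteration count, and least-squares loss are illustrative assumptions; the paper's guarantees concern general M-estimation losses.

```python
import numpy as np

def hard_threshold(w, k):
    """Keep the k largest-magnitude entries of w and zero out the rest."""
    out = np.zeros_like(w)
    top_k = np.argpartition(np.abs(w), -k)[-k:]
    out[top_k] = w[top_k]
    return out

def iht_least_squares(X, y, k, n_iters=300):
    """IHT for min_{||w||_0 <= k} (1/2n) ||Xw - y||^2:
    alternate a gradient step with projection onto the set of
    k-sparse vectors (hard thresholding)."""
    n, d = X.shape
    # Conservative step size: inverse of the gradient's Lipschitz
    # constant, lambda_max(X^T X / n) (an illustrative choice).
    step = n / (np.linalg.norm(X, 2) ** 2)
    w = np.zeros(d)
    for _ in range(n_iters):
        grad = X.T @ (X @ w - y) / n
        w = hard_threshold(w - step * grad, k)
    return w

# Synthetic sparse linear regression to exercise the sketch.
rng = np.random.default_rng(0)
n, d, k = 200, 1000, 10
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[:k] = rng.standard_normal(k)
y = X @ w_true + 0.01 * rng.standard_normal(n)
w_hat = iht_least_squares(X, y, k)
print("relative error:", np.linalg.norm(w_hat - w_true) / np.linalg.norm(w_true))
```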