
Uniform Asymptotic Inference and the Bootstrap After Model Selection

Abstract

Recently, Taylor et al. (2014) developed a method for making inferences on parameters after model selection, in a regression setting with normally distributed errors. In this work, we study the large sample properties of this method, without assuming normality. We prove that the test statistic of Taylor et al. (2014) is asymptotically pivotal, as the number of samples n grows and the dimension d of the regression problem stays fixed; our asymptotic result is uniformly valid over a wide class of nonnormal error distributions. We also propose an efficient bootstrap version of this test that is provably (asymptotically) conservative, and in practice often delivers shorter confidence intervals than the original normality-based approach. Finally, we prove that the test statistic of Taylor et al. (2014) does not converge uniformly in a high-dimensional setting, when the dimension d is allowed to grow.
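
The following is a minimal, illustrative sketch of the general idea of bootstrapping inference conditional on a model-selection event; it is not the construction of Taylor et al. (2014) or of this paper. The data-generating setup, the toy selection rule (largest absolute marginal correlation), and the helper names `select` and `fit_coef` are all assumptions introduced here for illustration only.

```python
# Hedged sketch: a selection-preserving pairs bootstrap for a coefficient chosen
# by a simple model-selection step. Everything below (data, selection rule,
# interval construction) is illustrative and not the authors' exact method.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.standard_normal((n, d))
y = 0.8 * X[:, 0] + rng.standard_normal(n)  # errors need not be normal

def select(X, y):
    """Toy selection rule: predictor with largest |inner product| with y."""
    return int(np.argmax(np.abs(X.T @ y)))

def fit_coef(X, y, j):
    """Least-squares coefficient of the selected predictor (simple regression)."""
    xj = X[:, j]
    return float(xj @ y / (xj @ xj))

j_hat = select(X, y)
beta_hat = fit_coef(X, y, j_hat)

# Pairs bootstrap, keeping only replicates that reproduce the original selection,
# to approximate the conditional (post-selection) distribution of the estimator.
B, boot, attempts = 2000, [], 0
while len(boot) < B and attempts < 50 * B:
    attempts += 1
    idx = rng.integers(0, n, size=n)
    Xb, yb = X[idx], y[idx]
    if select(Xb, yb) == j_hat:          # condition on the selection event
        boot.append(fit_coef(Xb, yb, j_hat))
boot = np.array(boot)

# Basic-bootstrap 90% confidence interval for the selected coefficient.
lo, hi = np.quantile(boot - beta_hat, [0.05, 0.95])
print(f"selected j = {j_hat}, beta_hat = {beta_hat:.3f}, "
      f"90% CI = [{beta_hat - hi:.3f}, {beta_hat - lo:.3f}]")
```

The conditioning step (discarding bootstrap replicates whose selected model differs from the one chosen on the original data) is the schematic point of contact with post-selection inference; the paper's actual test statistic and its bootstrap analogue are constructed differently.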
