
Finite Sample Valid Inference via Calibrated Bootstrap

Abstract

While widely used as a general method for uncertainty quantification, the bootstrap encounters difficulties that raise concerns about its validity in practical applications. This paper introduces a new resampling-based method, termed the calibrated bootstrap, designed to generate finite-sample-valid parametric inference from a sample of size n. The central idea is to calibrate an m-out-of-n resampling scheme, where the calibration parameter m is determined against inferential pivotal quantities derived from the cumulative distribution functions of loss functions in parameter estimation. The method comprises two algorithms. The first, named resampling approximation (RA), employs a stochastic approximation algorithm to find the value of the calibration parameter m = m_α for a given α in a manner that ensures the resulting m-out-of-n bootstrapped 1 − α confidence set is valid. The second, termed distributional resampling (DR), further selects samples of bootstrapped estimates from the RA step when confidence sets are to be constructed for a range of α values. The proposed method is illustrated and compared to existing methods using linear regression with and without an L_1 penalty, in a high-dimensional setting and a real-world data application. The paper concludes with remarks on a few open problems worthy of consideration.
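To make the calibration idea concrete, the following is a minimal, self-contained sketch of an m-out-of-n bootstrap whose parameter m is tuned by a Robbins–Monro-style stochastic approximation. It is not the paper's RA algorithm: instead of the pivotal quantities derived from loss-function CDFs, it uses a toy normal-mean setting with a simulated known truth, so that coverage of the percentile interval can be checked directly. All function names and tuning constants here are illustrative assumptions.

```python
import numpy as np

def m_out_of_n_ci(x, m, alpha, B=500, rng=None):
    """Percentile interval for the mean from B m-out-of-n resamples (with replacement)."""
    rng = rng or np.random.default_rng()
    idx = rng.integers(0, len(x), size=(B, m))        # B resamples of size m
    means = x[idx].mean(axis=1)
    return np.quantile(means, [alpha / 2, 1 - alpha / 2])

def calibrate_m(n, alpha, true_mean=0.0, steps=200, seed=0):
    """Toy stochastic-approximation search for m targeting 1 - alpha coverage.

    Assumes data from N(true_mean, 1); each step draws a fresh sample,
    checks whether the m-out-of-n interval covers the truth, and nudges m:
    covering too often pushes m up (narrower intervals), too rarely pushes it down.
    """
    rng = np.random.default_rng(seed)
    m = float(n)
    for t in range(1, steps + 1):
        x = rng.normal(true_mean, 1.0, size=n)
        lo, hi = m_out_of_n_ci(x, max(2, int(round(m))), alpha, rng=rng)
        covered = float(lo <= true_mean <= hi)
        m += (n / t) * (covered - (1 - alpha))        # Robbins-Monro update, step size n/t
        m = min(max(m, 2.0), 5.0 * n)                 # keep m in a sane range
    return int(round(m))
```

The update averages to zero exactly when the empirical coverage equals 1 − α, which is the fixed point the stochastic approximation seeks; the paper's RA algorithm plays the analogous role without requiring a known truth.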
