Finite Sample Valid Inference via Calibrated Bootstrap

While widely used as a general method for uncertainty quantification, the bootstrap method encounters difficulties that raise concerns about its validity in practical applications. This paper introduces a new resampling-based method, termed calibrated bootstrap, designed to generate finite sample-valid parametric inference from a sample of size n. The central idea is to calibrate an m-out-of-n resampling scheme, where the calibration parameter m is determined against inferential pivotal quantities derived from the cumulative distribution functions of loss functions in parameter estimation. The method comprises two algorithms. The first, named resampling approximation (RA), employs a stochastic approximation algorithm to find the value of the calibration parameter m for a given α in a manner that ensures the resulting m-out-of-n bootstrapped 1 − α confidence set is valid. The second algorithm, termed distributional resampling (DR), is developed to further select samples of bootstrapped estimates from the RA step when constructing confidence sets for a range of α values is of interest. The proposed method is illustrated and compared to existing methods using linear regression with and without L1 penalty, within the context of a high-dimensional setting and a real-world data application. The paper concludes with remarks on a few open problems worthy of consideration.
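To make the m-out-of-n idea concrete, here is a minimal, hypothetical sketch in Python. It is not the paper's RA algorithm: where the paper calibrates m against pivotal quantities derived from the CDFs of loss functions, this sketch substitutes a plain Monte Carlo coverage check on a known Gaussian model, nudging m with a stochastic-approximation-style update until the nominal 1 − α coverage is matched. The helper names (m_out_of_n_ci, calibrate_m) and all tuning constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def m_out_of_n_ci(sample, m, alpha, B=1000):
    """Percentile interval for the mean from B m-out-of-n bootstrap resamples."""
    idx = rng.integers(0, len(sample), size=(B, m))   # resample m of n points
    boot_means = sample[idx].mean(axis=1)
    return np.quantile(boot_means, [alpha / 2, 1 - alpha / 2])

def calibrate_m(n, alpha, steps=100, reps=40):
    """Stochastic-approximation-style search for m (an illustrative stand-in
    for RA): adjust m until the empirical coverage of the 1 - alpha interval,
    estimated on synthetic N(0, 1) data with known mean 0, matches its
    nominal level."""
    m = float(n)                                      # start at the classical bootstrap
    for t in range(1, steps + 1):
        hits = 0
        for _ in range(reps):
            sample = rng.normal(0.0, 1.0, size=n)     # true mean is 0
            lo, hi = m_out_of_n_ci(sample, int(round(m)), alpha)
            hits += lo <= 0.0 <= hi
        err = hits / reps - (1 - alpha)               # >0: over-coverage, so grow m
        m = float(np.clip(m + (n / t ** 0.5) * err, 2.0, n))
    return int(round(m))

if __name__ == "__main__":
    m_star = calibrate_m(n=50, alpha=0.10)
    data = rng.normal(0.0, 1.0, size=50)
    print("calibrated m:", m_star)
    print("90% CI for the mean:", m_out_of_n_ci(data, m_star, 0.10))
```

The sketch starts at m = n (the classical bootstrap) and grows or shrinks m depending on whether the intervals over- or under-cover, since smaller m yields wider percentile intervals. The paper's calibration replaces this synthetic coverage check with pivotal quantities, so no knowledge of the true parameter is required.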