Statistical inference and feasibility determination: a nonasymptotic approach

We develop nonasymptotically justified methods for hypothesis testing about the p-dimensional coefficients θ* in (possibly nonlinear) regression models. Given a function h : ℝ^p → ℝ^m, we consider the null hypothesis H₀ : h(θ*) ∈ Λ against the alternative hypothesis H₁ : h(θ*) ∉ Λ, where Λ is a nonempty closed subset of ℝ^m and h can be nonlinear in θ*. Our (nonasymptotic) control of the Type I and Type II errors holds for fixed sample size n and does not rely on well-behaved estimation or prediction error; in particular, when the number of restrictions m in H₀ is large relative to n, we show it is possible to bypass the sparsity assumption on θ* (for both Type I and Type II error control), regularization of the estimates of θ*, and other challenges inherent in an inverse problem. We also demonstrate an interesting link between our framework and Farkas' lemma (in mathematical programming) under uncertainty, which points to potential applications of our method outside traditional hypothesis testing.
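For context, the classical form of Farkas' lemma referenced above (a theorem of the alternative for linear systems; the precise variant used in the paper may differ) can be stated as:

```latex
% Farkas' lemma, classical form: for a matrix A and vector b,
% exactly one of the two systems below is feasible.
\begin{theorem}[Farkas' lemma]
Let $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$.
Then exactly one of the following two statements holds:
\begin{enumerate}
  \item there exists $x \in \mathbb{R}^n$ with $x \ge 0$ and $Ax = b$;
  \item there exists $y \in \mathbb{R}^m$ with $A^{\top} y \ge 0$
        and $b^{\top} y < 0$.
\end{enumerate}
\end{theorem}
```

The relevance to feasibility determination is that deciding whether a linear system such as $Ax = b$, $x \ge 0$ admits a solution is equivalent, via the certificate $y$ in the second alternative, to a closed-set membership question of the kind the null hypothesis H₀ : h(θ*) ∈ Λ encodes.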