
Statistical inference and feasibility determination: a nonasymptotic approach

Abstract

We develop nonasymptotically justified methods for hypothesis testing about the $p$-dimensional coefficients $\theta^{*}$ in (possibly nonlinear) regression models. Given a function $h:\,\mathbb{R}^{p}\mapsto\mathbb{R}^{m}$, we consider the null hypothesis $H_{0}:\,h(\theta^{*})\in\Omega$ against the alternative hypothesis $H_{1}:\,h(\theta^{*})\notin\Omega$, where $\Omega$ is a nonempty closed subset of $\mathbb{R}^{m}$ and $h$ can be nonlinear in $\theta^{*}$. Our (nonasymptotic) control of the Type I and Type II errors holds for fixed $n$ and does not rely on well-behaved estimation or prediction error; in particular, when the number of restrictions in $H_{0}$ is large relative to $p-n$, we show it is possible to bypass the sparsity assumption on $\theta^{*}$ (for both Type I and Type II error control), regularization of the estimates of $\theta^{*}$, and other challenges inherent to inverse problems. We also demonstrate an interesting link between our framework and Farkas' lemma (from mathematical programming) under uncertainty, which points to potential applications of our method outside traditional hypothesis testing.
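
To make the connection to feasibility determination concrete, consider an illustrative special case (the choices of $h$, $\Omega$, $A$, and $b$ below are ours for exposition, not ones prescribed by the paper): take $h(\theta)=A\theta-b$ for a known matrix $A\in\mathbb{R}^{m\times p}$ and vector $b\in\mathbb{R}^{m}$, and let $\Omega=(-\infty,0]^{m}$. The null hypothesis then states that the true coefficient vector satisfies a system of linear inequality constraints, which is the kind of feasibility question Farkas' lemma addresses:

% Illustrative special case (assumed here for concreteness, not taken from the paper)
\[
  H_{0}:\; A\theta^{*}\le b
  \qquad\text{versus}\qquad
  H_{1}:\; A\theta^{*}\nleq b .
\]

For reference, a standard variant of Farkas' lemma says that the deterministic system $Ax\le b$ is infeasible exactly when there exists $y\ge 0$ with $y^{\top}A=0$ and $y^{\top}b<0$; the abstract's "under uncertainty" qualifier indicates that the paper's link concerns a stochastic analogue of this dichotomy, where the constraints on $\theta^{*}$ can only be assessed through noisy data.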
