Non-asymptotic theory for nonparametric testing

We consider nonparametric testing in a non-asymptotic framework. Our statistical guarantees are exact in the sense that Type I and Type II errors are controlled for any finite sample size. At the same time, one proposed test is shown to achieve minimax optimality in the asymptotic sense. An important consequence of this non-asymptotic theory is a new and practically useful formula for selecting the optimal smoothing parameter in nonparametric testing. The leading example in this paper is the smoothing spline model under Gaussian errors. The results obtained there can be further generalized to the kernel ridge regression framework under possibly non-Gaussian errors. Simulations demonstrate that the proposed test improves on the conventional asymptotic test when the sample size is small to moderate.
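To fix ideas, the sketch below shows the generic setup the abstract refers to: a kernel ridge regression fit with a smoothing parameter, a simple quadratic test statistic for the null of no signal, and a Monte Carlo calibration under Gaussian errors. It is only an illustration of the testing framework; the paper's non-asymptotic choice of the smoothing parameter and its finite-sample error guarantees are not reproduced here, and the GCV selection, kernel, and noise scale used below are assumptions made for the example.

```python
# Illustrative sketch only: kernel ridge regression (KRR) with a Gaussian kernel,
# a quadratic test statistic for H0: f = 0, and a generic GCV choice of the
# smoothing parameter lambda. This is NOT the paper's non-asymptotic rule.
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kernel(x, y, bandwidth=0.2):
    """Gaussian (RBF) kernel matrix between 1-d design points x and y."""
    return np.exp(-0.5 * ((x[:, None] - y[None, :]) / bandwidth) ** 2)

def krr_smoother(K, lam):
    """Smoother matrix A(lambda) = K (K + n*lambda*I)^{-1} for KRR."""
    n = K.shape[0]
    return K @ np.linalg.solve(K + n * lam * np.eye(n), np.eye(n))

def gcv_lambda(K, y, grid):
    """Pick lambda from a grid by generalized cross-validation (illustrative only)."""
    n = len(y)
    scores = []
    for lam in grid:
        A = krr_smoother(K, lam)
        resid = y - A @ y
        scores.append((resid @ resid / n) / (1 - np.trace(A) / n) ** 2)
    return grid[int(np.argmin(scores))]

# Simulated data: H0 is f = 0; replace `signal` with a nonzero function for H1.
n = 200
x = rng.uniform(0, 1, n)
signal = np.zeros(n)            # e.g. 0.3 * np.sin(4 * np.pi * x) under H1
y = signal + rng.normal(scale=0.5, size=n)

K = gaussian_kernel(x, x)
lam = gcv_lambda(K, y, np.logspace(-6, 0, 30))
A = krr_smoother(K, lam)
f_hat = A @ y

# Quadratic test statistic ||f_hat||_n^2, with its null distribution
# approximated by Monte Carlo under Gaussian errors of known scale.
stat = np.mean(f_hat ** 2)
null_stats = np.array([np.mean((A @ rng.normal(scale=0.5, size=n)) ** 2)
                       for _ in range(500)])
p_value = np.mean(null_stats >= stat)
print(f"lambda = {lam:.2e}, statistic = {stat:.4f}, p-value = {p_value:.3f}")
```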