
Solvable Integration Problems and Optimal Sample Size Selection

Abstract

We compute the integral of a function or the expectation of a random variable with minimal cost and use, for our new algorithm and for upper bounds of the complexity, i.i.d. samples. Under certain assumptions it is possible to select a sample size based on a variance estimation, or -- more generally -- based on an estimation of a (central absolute) p-moment. That way one can guarantee a small absolute error with high probability; the problem is thus called solvable. The expected cost of the method depends on the p-moment of the random variable, which can be arbitrarily large. In order to prove the optimality of our algorithm we also provide lower bounds. These bounds apply not only to methods based on i.i.d. samples but also to general randomized algorithms. They show that -- up to constants -- the cost of the algorithm is optimal in terms of accuracy, confidence level, and norm of the particular input random variable. Since the considered classes of random variables or integrands are very large, the worst case cost would be infinite. Nevertheless, one can define adaptive stopping rules such that for each input the expected cost is finite. We contrast these positive results with examples of integration problems that are not solvable.
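To illustrate the general idea of variance-based sample size selection (not the paper's exact algorithm), here is a minimal two-stage Monte Carlo sketch: a pilot sample estimates the variance, and Chebyshev's inequality then dictates a main sample size that guarantees absolute error at most eps with probability at least 1 - delta, assuming the pilot variance estimate is reliable. The function names and parameters are illustrative assumptions.

```python
import random
import statistics

def adaptive_mean_estimate(sample, eps, delta, pilot_size=100):
    """Two-stage Monte Carlo mean estimation (illustrative sketch).

    Stage 1: draw a pilot sample and estimate the variance.
    Stage 2: choose the main sample size n via Chebyshev's inequality,
        P(|mean_n - mu| > eps) <= var / (n * eps^2) <= delta,
    i.e. n >= var / (delta * eps^2), treating the pilot variance
    estimate as if it were the true variance.
    """
    pilot = [sample() for _ in range(pilot_size)]
    var_hat = statistics.variance(pilot)  # unbiased sample variance
    # Sample size implied by Chebyshev's inequality (at least pilot_size).
    n = max(pilot_size, int(var_hat / (delta * eps ** 2)) + 1)
    main = [sample() for _ in range(n)]
    return statistics.fmean(main), n

# Example: estimate E[U^2] = 1/3 for U uniform on [0, 1].
if __name__ == "__main__":
    random.seed(0)
    est, n = adaptive_mean_estimate(lambda: random.random() ** 2,
                                    eps=0.05, delta=0.1)
    print(f"estimate = {est:.4f} using n = {n} samples")
```

Note that the expected number of samples here scales with the (estimated) variance of the input, mirroring the abstract's point that the cost depends on the moment of the particular random variable and can be arbitrarily large, while remaining finite for each fixed input.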
