
Importance Sampling and Necessary Sample Size: an Information Theory Approach

Abstract

Importance sampling approximates expectations with respect to a target measure by using samples from a proposal measure. The performance of the method over large classes of test functions depends heavily on the closeness between the two measures. We derive a general bound that must hold for importance sampling to be successful, and that relates the f-divergence between the target and the proposal to the sample size. The bound is deduced from a new and simple information theory paradigm for the study of importance sampling. As examples of the general theory we give necessary conditions on the sample size in terms of the Kullback-Leibler and χ² divergences, and the total variation and Hellinger distances. Our approach is non-asymptotic, and its generality makes it possible to compare the relative merits of these metrics. Unsurprisingly, the non-symmetric divergences give sharper bounds than total variation or Hellinger. Our results extend existing necessary conditions, and complement sufficient ones, on the sample size required for importance sampling.
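To fix ideas, the sketch below shows standard self-normalized importance sampling (the textbook estimator, not the paper's bounds): expectations under the target are approximated by weighting proposal draws with the density ratio target/proposal. All function names and the Gaussian example are illustrative assumptions.

```python
# Minimal self-normalized importance sampling sketch (illustrative, not from the paper).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def importance_sampling_estimate(phi, target_logpdf, proposal_logpdf, proposal_sampler, n):
    """Estimate E_target[phi] from n draws of the proposal, using normalized weights."""
    x = proposal_sampler(rng, n)                      # x_1, ..., x_n ~ proposal
    log_w = target_logpdf(x) - proposal_logpdf(x)     # unnormalized log-weights (density ratio)
    w = np.exp(log_w - log_w.max())                   # shift for numerical stability
    w /= w.sum()                                      # self-normalize
    return np.sum(w * phi(x))

# Example: target N(1, 1), proposal N(0, 2), phi(x) = x, so the true expectation is 1.
est = importance_sampling_estimate(
    phi=lambda x: x,
    target_logpdf=lambda x: norm.logpdf(x, loc=1.0, scale=1.0),
    proposal_logpdf=lambda x: norm.logpdf(x, loc=0.0, scale=2.0),
    proposal_sampler=lambda rng, n: rng.normal(0.0, 2.0, size=n),
    n=10_000,
)
print(est)  # close to 1; accuracy deteriorates as the proposal drifts from the target
```

As the divergence between target and proposal grows, the weights become increasingly unbalanced and a much larger sample size is needed, which is the regime the paper's necessary conditions quantify.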
