
On one-sample Bayesian tests for the mean

Abstract

This paper deals with a new Bayesian approach to the standard one-sample $z$- and $t$-tests. More specifically, let $x_1,\ldots,x_n$ be an independent random sample from a normal distribution with mean $\mu$ and variance $\sigma^2$. The goal is to test the null hypothesis $\mathcal{H}_0: \mu=\mu_1$ against all possible alternatives. The approach is based on the well-known formula for the Kullback-Leibler divergence between two normal distributions (with the sampling and hypothesized distributions selected in an appropriate way). The change in this divergence from prior to posterior is assessed through the relative belief ratio (a measure of evidence). Eliciting the prior and checking for prior-data conflict and bias are also considered. Several theoretical properties of the procedure are established. Besides its simplicity, and unlike the classical approach, the new approach possesses attractive and distinctive features, such as the ability to provide evidence in favor of the null hypothesis. It also avoids several undesirable paradoxes, such as Lindley's paradox, that may be encountered by some existing Bayesian methods. The use of the approach is illustrated through several examples.
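
For reference (these closed forms are standard results and are not quoted from the paper itself), the Kullback-Leibler divergence between two normal distributions, say $N(\mu_p,\sigma_p^2)$ and $N(\mu_q,\sigma_q^2)$, and the relative belief ratio of a quantity of interest $\psi$ with prior density $\pi(\psi)$ and posterior density $\pi(\psi\mid x)$ (notation introduced here purely for illustration) are
$$
\mathrm{KL}\!\left(N(\mu_p,\sigma_p^2)\,\middle\|\,N(\mu_q,\sigma_q^2)\right)
= \log\frac{\sigma_q}{\sigma_p} + \frac{\sigma_p^2 + (\mu_p-\mu_q)^2}{2\sigma_q^2} - \frac{1}{2},
\qquad
\mathrm{RB}(\psi\mid x) = \frac{\pi(\psi\mid x)}{\pi(\psi)},
$$
with $\mathrm{RB}(\psi\mid x) > 1$ read as evidence in favor of $\psi$ and $\mathrm{RB}(\psi\mid x) < 1$ as evidence against it.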
