
On one-sample Bayesian tests for the mean

3 March 2019
Ibrahim Abdelrazeq
L. Al-Labadi
arXiv:1903.00851 (abs, PDF)
Abstract

This paper deals with a new Bayesian approach to the standard one-sample $z$- and $t$-tests. More specifically, let $x_1,\ldots,x_n$ be an independent random sample from a normal distribution with mean $\mu$ and variance $\sigma^2$. The goal is to test the null hypothesis $\mathcal{H}_0: \mu=\mu_1$ against all possible alternatives. The approach is based on the well-known formula for the Kullback-Leibler divergence between two normal distributions (the sampling and hypothesized distributions, selected in an appropriate way). The change in this distance from a priori to a posteriori is assessed through the relative belief ratio (a measure of evidence). Eliciting the prior and checking for prior-data conflict and bias are also considered. Many theoretical properties of the procedure are developed. Besides its simplicity, and unlike the classical approach, the new approach possesses attractive and distinctive features, such as being able to give evidence in favor of the null hypothesis. It also avoids several undesirable paradoxes, such as Lindley's paradox, that may be encountered by some existing Bayesian methods. The use of the approach is illustrated through several examples.
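To make the recipe in the abstract concrete, the sketch below computes the closed-form Kullback-Leibler divergence between two normal distributions and compares the prior and posterior probability that this distance falls near zero, in the spirit of a relative belief ratio. The conjugate $N(\mu_0,\tau_0^2)$ prior, the known-variance setting, the tolerance `eps`, and all numerical values are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def kl_normal(mu1, sigma1, mu2, sigma2):
    """Kullback-Leibler divergence KL(N(mu1, sigma1^2) || N(mu2, sigma2^2))."""
    return (np.log(sigma2 / sigma1)
            + (sigma1**2 + (mu1 - mu2)**2) / (2.0 * sigma2**2)
            - 0.5)

# Hypothetical setup (assumed, not from the paper): known variance sigma^2,
# conjugate N(mu0, tau0^2) prior on mu, and the distance taken as
# KL(N(mu, sigma^2) || N(mu1, sigma^2)) for the hypothesized mean mu1.
rng = np.random.default_rng(0)
mu1_null, sigma = 0.0, 1.0           # hypothesized mean and known sd
mu0, tau0 = 0.0, 2.0                 # prior hyperparameters (assumed)
x = rng.normal(0.2, sigma, size=30)  # simulated data

# Posterior of mu under the conjugate normal model
post_var = 1.0 / (1.0 / tau0**2 + len(x) / sigma**2)
post_mean = post_var * (mu0 / tau0**2 + x.sum() / sigma**2)

# Monte Carlo draws of the KL distance under the prior and the posterior
mu_prior = rng.normal(mu0, tau0, size=100_000)
mu_post = rng.normal(post_mean, np.sqrt(post_var), size=100_000)
d_prior = kl_normal(mu_prior, sigma, mu1_null, sigma)
d_post = kl_normal(mu_post, sigma, mu1_null, sigma)

# Relative-belief-style comparison at "distance close to 0": ratio of the
# posterior to the prior probability of a small neighbourhood of zero.
eps = 0.01
rb = (d_post < eps).mean() / (d_prior < eps).mean()
print(f"relative belief ratio near d = 0: {rb:.2f}")
```

In the relative belief framework, a ratio above one is read as evidence in favor of the hypothesized value and a ratio below one as evidence against it; the paper's procedure formalizes this comparison and studies its properties, whereas the snippet above is only a rough numerical illustration.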
