Bernstein–von Mises Theorem for growing parameter dimension

Abstract

The prominent Bernstein–von Mises (BvM) Theorem claims that the posterior distribution is asymptotically normal, its mean is nearly the maximum likelihood estimator (MLE), and its variance is nearly the inverse of the total Fisher information matrix, just as for the MLE. This paper revisits the classical result from different viewpoints. Particular issues to address are: a nonasymptotic framework with just one finite sample, possible model misspecification, and a large parameter dimension. In particular, in the case of an i.i.d. sample, the BvM result can be stated for any smooth parametric family provided that the dimension \(p\) of the parameter space satisfies the condition "\(p^{3}/n\) is small".
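
Schematically, and in shorthand notation chosen here rather than taken from the paper, the BvM approximation described above reads

\[
\mathcal{L}(\theta \mid X) \;\approx\; \mathcal{N}\bigl(\tilde{\theta},\, \mathcal{I}^{-1}\bigr),
\]

where \(\tilde{\theta}\) denotes the MLE (or, under misspecification, the quasi-MLE) and \(\mathcal{I}\) the total Fisher information matrix of the sample. The abstract's claim is that such an approximation can be justified nonasymptotically and, for an i.i.d. sample of size \(n\), under the sole dimensional condition that \(p^{3}/n\) is small.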
