Bernstein–von Mises Theorem for growing parameter dimension

The prominent Bernstein–von Mises (BvM) theorem claims that the posterior distribution is asymptotically normal, with mean close to the maximum likelihood estimator (MLE) and variance close to the inverse of the total Fisher information matrix, exactly as for the MLE itself. This result is commonly used to justify elliptic credible sets built from Bayesian simulations. This paper revisits the classical result from several viewpoints. The particular issues addressed are: a nonasymptotic framework with a single finite sample, possible model misspecification, and a large parameter dimension. It turns out that the BvM result extends to any smooth parametric family provided that the dimension p of the parameter space satisfies the condition that p^{3}/n is small.
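The BvM approximation described above can be checked numerically in a minimal conjugate example (not from the paper, just an illustration): for Bernoulli data with a flat Beta prior, the posterior mean and variance should nearly match the MLE and the inverse total Fisher information once n is large.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
theta_true = 0.3
x = rng.binomial(1, theta_true, size=n)
k = int(x.sum())

# MLE and inverse total Fisher information for Bernoulli(theta):
# I_n(theta) = n / (theta (1 - theta)), evaluated at the MLE
theta_mle = k / n
fisher_inv = theta_mle * (1 - theta_mle) / n

# Flat Beta(1, 1) prior gives a Beta(k + 1, n - k + 1) posterior
a, b = 1 + k, 1 + n - k
post_mean = a / (a + b)
post_var = a * b / ((a + b) ** 2 * (a + b + 1))

# BvM: posterior mean ~ MLE, posterior variance ~ inverse Fisher information
print(post_mean, theta_mle)
print(post_var, fisher_inv)
```

Here p = 1, so the condition p^{3}/n small is trivially met; the paper's point is that the same closeness persists for smooth p-dimensional families as long as p^{3}/n stays small.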