In an indirect Gaussian sequence space model, lower and upper bounds are derived for the concentration rate of the posterior distribution of the parameter of interest shrinking to the parameter value that generates the data. While this establishes posterior consistency, the concentration rate depends on both the true parameter and a tuning parameter which enters the prior distribution. We first provide an oracle optimal choice of the tuning parameter, i.e., optimized for each parameter value separately. The optimal choice of the prior distribution allows us to derive an oracle optimal concentration rate of the associated posterior distribution. Moreover, for a given class of parameters and a suitable choice of the tuning parameter, we show that the resulting uniform concentration rate over the given class is optimal in a minimax sense. Finally, we construct a hierarchical prior that is adaptive: given a parameter or a class of parameters, respectively, the posterior distribution contracts at the oracle rate or at the minimax rate over the class. Notably, the hierarchical prior depends neither on the true parameter nor on the given class. Moreover, convergence of the fully data-driven Bayes estimator at the oracle or at the minimax rate is established.
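For context, a minimal sketch of one standard formulation of such a model and prior; the notation ($\lambda_j$, $\varepsilon$, and the truncation level $m$) is assumed here for illustration and is not specified in the abstract itself.

% Indirect Gaussian sequence space model: one observes, coordinatewise,
\[
  Y_j \;=\; \lambda_j\,\theta_j \;+\; \varepsilon\,\xi_j,
  \qquad \xi_j \overset{\text{iid}}{\sim} \mathcal{N}(0,1),
  \qquad j \ge 1,
\]
% with known singular values (\lambda_j) of the operator linking the
% observations to the parameter of interest \theta = (\theta_j)_{j \ge 1},
% and noise level \varepsilon > 0.
% A hypothetical conjugate sieve prior, whose truncation level m plays
% the role of the tuning parameter entering the prior distribution:
\[
  \theta_j \;\sim\; \mathcal{N}\bigl(0,\,\sigma_j^2\bigr)
  \;\;\text{independently for } 1 \le j \le m,
  \qquad
  \theta_j \;=\; 0 \;\;\text{for } j > m .
\]

In a sketch of this kind, an oracle choice of $m$ balances the bias incurred by setting $\theta_j = 0$ for $j > m$ against the posterior spread accumulated over the first $m$ coordinates, which is how a single tuning parameter comes to govern the concentration rate.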