On Bayes Risk Lower Bounds

This paper provides a general technique for lower bounding the Bayes risk for arbitrary loss functions and prior distributions in the standard abstract decision-theoretic setting. A lower bound on the Bayes risk not only serves as a lower bound on the minimax risk but also characterizes the fundamental statistical difficulty of a decision problem under a given prior. Our bounds are based on the notion of f-informativity of the underlying class of probability measures and the prior. Applying our bounds requires upper bounds on the f-informativity, and we derive new upper bounds on f-informativity for a class of f functions that lead to tight Bayes risk lower bounds. Our technique yields generalizations of a variety of classical minimax bounds (e.g., a generalized Fano's inequality). Using our Bayes risk lower bound, we give a succinct proof of the main result of Chatterjee [2014]: for estimating the mean of a Gaussian random vector under a convex constraint, the least squares estimator is always admissible up to a constant.
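As a standard point of reference (not the paper's exact notation; the symbols $f$, $w$, $\{P_\theta\}$, and $Q$ are chosen here for illustration), the f-informativity of a prior $w$ over a family of distributions $\{P_\theta\}$ is built from the f-divergence, where $f$ is convex with $f(1) = 0$:
\[
D_f(P \,\|\, Q) \;=\; \int f\!\left(\frac{dP}{dQ}\right) dQ,
\qquad
I_f\bigl(w, \{P_\theta\}\bigr) \;=\; \inf_{Q} \int D_f(P_\theta \,\|\, Q)\, dw(\theta).
\]
Taking $f(x) = x \log x$ makes $I_f$ the mutual information $I(\theta; X)$, and for a uniform prior over $N$ hypotheses with zero-one loss the Bayes risk obeys the classical Fano bound
\[
R_{\mathrm{Bayes}} \;\ge\; 1 - \frac{I(\theta; X) + \log 2}{\log N},
\]
which is the kind of inequality the paper's general bounds recover as a special case.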