A constrained risk inequality for general losses

Abstract
We provide a general constrained risk inequality that applies to arbitrary non-decreasing losses, extending a result of Brown and Low [Ann. Stat. 1996]. Given two distributions $P_0$ and $P_1$, we find a lower bound for the risk of estimating a parameter $\theta(P_1)$ under $P_1$ given an upper bound on the risk of estimating the parameter $\theta(P_0)$ under $P_0$. The inequality is a useful pedagogical tool, as its proof relies only on the Cauchy-Schwarz inequality, it applies to general losses, and it transparently gives risk lower bounds on super-efficient and adaptive estimators.
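
For intuition, here is a sketch of the classical Brown-Low special case that the paper generalizes, stated for squared error loss; the notation ($\hat\theta$, $\epsilon$, $\Delta$, $I$) is illustrative and not drawn from the abstract. With likelihood ratio $L = dP_1/dP_0$, set $I^2 = \mathbb{E}_{P_0}[L^2]$ and $\Delta = |\theta(P_1) - \theta(P_0)|$.

% Sketch of the Brown--Low (1996) constrained risk inequality for squared
% error loss; notation is illustrative, not the paper's.
\[
  \text{If } \mathbb{E}_{P_0}\!\bigl[(\hat\theta - \theta(P_0))^2\bigr] \le \epsilon^2
  \ \text{ and } \ \epsilon I \le \Delta, \ \text{ then }
  \mathbb{E}_{P_1}\!\bigl[(\hat\theta - \theta(P_1))^2\bigr] \ge (\Delta - \epsilon I)^2.
\]
% Proof idea via Cauchy--Schwarz: changing measure with the likelihood ratio,
%   E_{P_1}[\hat\theta - \theta(P_0)] = E_{P_0}[(\hat\theta - \theta(P_0)) L]
%     <= sqrt(E_{P_0}[(\hat\theta - \theta(P_0))^2]) * sqrt(E_{P_0}[L^2]) <= \epsilon I,
% so the bias of \hat\theta under P_1 has magnitude at least \Delta - \epsilon I,
% and the risk is at least the squared bias by Jensen's inequality.

This is the mechanism the abstract alludes to: the Cauchy-Schwarz step converts a small risk under $P_0$ into a bias, and hence a risk lower bound, under $P_1$.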