The Neyman-Pearson region of a simple binary hypothesis testing problem is the set of points whose coordinates represent the false positive rate and false negative rate of some test. The lower boundary of this region is given by the Neyman-Pearson lemma and is, up to a coordinate change, equivalent to the optimal ROC curve. We establish a novel lower bound for the boundary in terms of any $f$-divergence. Since the bound generated by hockey-stick $f$-divergences characterizes the Neyman-Pearson boundary, this bound is best possible. In the case of KL divergence, this bound improves Pinsker's inequality. Furthermore, we obtain a closed-form refined upper bound for the Neyman-Pearson boundary in terms of the Chernoff $\alpha$-coefficient. Finally, we present methods for constructing pairs of distributions that can approximately or exactly realize any given Neyman-Pearson boundary.
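For context, a sketch of the standard background definitions behind these bounds (textbook material, not quoted from the paper): the $f$-divergence of $P$ from $Q$ is $D_f(P\|Q)=\int f(\mathrm{d}P/\mathrm{d}Q)\,\mathrm{d}Q$ for convex $f$ with $f(1)=0$, and the hockey-stick divergence is the special case $E_\gamma(P\|Q)=D_{f_\gamma}(P\|Q)$ with $f_\gamma(t)=(t-\gamma)_+$. Writing $\alpha=\mathbb{E}_Q[\phi]$ and $\beta=1-\mathbb{E}_P[\phi]$ for the false positive and false negative rates of a randomized test $\phi$, the variational form $E_\gamma(P\|Q)=\sup_{0\le\phi\le 1}\big(\mathbb{E}_P[\phi]-\gamma\,\mathbb{E}_Q[\phi]\big)$ immediately yields the family of linear lower bounds
\[
\beta \;\ge\; 1-\gamma\alpha-E_\gamma(P\|Q), \qquad \gamma \ge 0,
\]
and taking the supremum over $\gamma$ recovers the convex lower boundary, which is one way to see why bounds generated by hockey-stick divergences can characterize the Neyman-Pearson boundary.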
@article{mullhaupt2025_2505.08899,
  title={Bounding Neyman-Pearson Region with $f$-Divergences},
  author={Andrew Mullhaupt and Cheng Peng},
  journal={arXiv preprint arXiv:2505.08899},
  year={2025}
}