
Computing High-dimensional Confidence Sets for Arbitrary Distributions

Abstract

We study the problem of learning a high-density region of an arbitrary distribution over $\mathbb{R}^d$. Given a target coverage parameter $\delta$, and sample access to an arbitrary distribution $D$, we want to output a confidence set $S \subset \mathbb{R}^d$ such that $S$ achieves $\delta$ coverage of $D$, i.e., $\mathbb{P}_{y \sim D}\left[ y \in S \right] \ge \delta$, and the volume of $S$ is as small as possible. This is a central problem in high-dimensional statistics with applications in finding confidence sets, uncertainty quantification, and support estimation.

In the most general setting, this problem is statistically intractable, so we restrict our attention to competing with sets from a concept class $C$ with bounded VC-dimension. An algorithm is competitive with class $C$ if, given samples from an arbitrary distribution $D$, it outputs in polynomial time a set that achieves $\delta$ coverage of $D$, and whose volume is competitive with the smallest set in $C$ with the required coverage $\delta$. This problem is computationally challenging even in the basic setting when $C$ is the set of all Euclidean balls. Existing algorithms based on coresets find in polynomial time a ball whose volume is $\exp(\tilde{O}(d/\log d))$-factor competitive with the volume of the best ball.

Our main result is an algorithm that finds a confidence set whose volume is $\exp(\tilde{O}(d^{1/2}))$-factor competitive with the optimal ball having the desired coverage. The algorithm is improper (it outputs an ellipsoid). Combined with our computational intractability result for proper learning of balls within an $\exp(\tilde{O}(d^{1-o(1)}))$ approximation factor in volume, our results provide an interesting separation between proper and (improper) learning of confidence sets.
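The coverage criterion above can be made concrete with a minimal numerical sketch. This is a naive baseline, not the paper's algorithm: it centers a ball at the empirical mean and grows the radius until a $\delta$ fraction of the samples is covered, then reports the ball's volume. All function names and parameters below are illustrative.

```python
import numpy as np
from math import pi, gamma

def ball_volume(radius, d):
    # Volume of a Euclidean ball of the given radius in R^d.
    return (pi ** (d / 2) / gamma(d / 2 + 1)) * radius ** d

def naive_delta_ball(samples, delta):
    """Naive baseline (illustrative only): center the ball at the sample
    mean and take the smallest radius covering a delta fraction of the
    samples. The paper's algorithm instead outputs an ellipsoid with
    provable volume guarantees."""
    center = samples.mean(axis=0)
    dists = np.linalg.norm(samples - center, axis=1)
    radius = np.quantile(dists, delta)
    return center, radius

# Usage: 500 samples from a 2-d standard Gaussian, target coverage 0.9.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 2))
center, radius = naive_delta_ball(X, delta=0.9)
coverage = np.mean(np.linalg.norm(X - center, axis=1) <= radius)
volume = ball_volume(radius, d=2)
```

Empirical coverage on the training samples is at least $\delta$ by construction; the statistical question studied in the paper is how well such guarantees generalize to the underlying distribution $D$, and how small the volume can be made.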

@article{gao2025_2504.02723,
  title={Computing High-dimensional Confidence Sets for Arbitrary Distributions},
  author={Chao Gao and Liren Shan and Vaidehi Srinivas and Aravindan Vijayaraghavan},
  journal={arXiv preprint arXiv:2504.02723},
  year={2025}
}