
Testing Support Size More Efficiently Than Learning Histograms

Abstract

Consider two problems about an unknown probability distribution p:

1. How many samples from p are required to test whether p is supported on n elements? Specifically, given samples from p, determine whether it is supported on at most n elements, or is "ε-far" (in total variation distance) from being supported on n elements.
2. Given m samples from p, what is the largest lower bound on its support size that we can produce?

The best known upper bound for problem (1) uses a general algorithm for learning the histogram of the distribution p, which requires Θ(n / (ε² log n)) samples. We show that testing can be done more efficiently than learning the histogram, using only O((n / (ε log n)) · log(1/ε)) samples, nearly matching the best known lower bound of Ω(n / (ε log n)). This algorithm also provides a better solution to problem (2), producing larger lower bounds on support size than those that follow from previous work. The proof relies on an analysis of Chebyshev polynomial approximations outside the range where they are designed to be good approximations, and the paper is intended as an accessible, self-contained exposition of the Chebyshev polynomial method.
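The key phenomenon the abstract alludes to can be illustrated concretely: Chebyshev polynomials of the first kind are bounded by 1 in absolute value on [-1, 1], but grow rapidly (like cosh(k · arccosh|x|)) once the argument leaves that interval. The sketch below, which is only a generic illustration of this behavior and not the paper's algorithm, evaluates T_k via the standard three-term recurrence:

```python
def chebyshev_T(k, x):
    """Chebyshev polynomial of the first kind, evaluated by the recurrence
    T_0(x) = 1, T_1(x) = x, T_{k+1}(x) = 2x * T_k(x) - T_{k-1}(x)."""
    t_prev, t_cur = 1.0, float(x)
    if k == 0:
        return t_prev
    for _ in range(k - 1):
        t_prev, t_cur = t_cur, 2 * x * t_cur - t_prev
    return t_cur

# Inside [-1, 1] the values stay bounded by 1 in absolute value...
print(max(abs(chebyshev_T(5, x / 10)) for x in range(-10, 11)))
# ...but just outside, the polynomial blows up quickly with the degree.
print(chebyshev_T(3, 1.1))   # ≈ 2.024, already above the [-1, 1] bound
```

This contrast, small on the design interval yet explosive outside it, is what makes controlling Chebyshev approximations beyond their intended range a delicate part of the analysis.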

@article{jr.2025_2410.18915,
  title={Testing Support Size More Efficiently Than Learning Histograms},
  author={Renato Ferreira Pinto Jr. and Nathaniel Harms},
  journal={arXiv preprint arXiv:2410.18915},
  year={2025}
}