We study the fundamental problem of estimating an unknown discrete distribution p over k symbols, given i.i.d. samples from the distribution. We are interested in minimizing the KL divergence between the true distribution and the algorithm's estimate. We first construct minimax optimal private estimators. Minimax optimality, however, fails to shed light on an algorithm's performance on individual (non-worst-case) instances p, and simple minimax-optimal DP estimators can have poor empirical performance on real distributions. We then study this problem from an instance-optimality viewpoint, where the algorithm's error on p is compared to the minimum achievable estimation error over a small local neighborhood of p. Under natural notions of local neighborhood, we propose algorithms that achieve instance-optimality up to constant factors, with and without a differential privacy constraint. Our upper bounds rely on (private) variants of the Good-Turing estimator. Our lower bounds use additive local neighborhoods that more precisely capture the hardness of distribution estimation in KL divergence than the neighborhoods considered in prior works.
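For context on the building block named above, here is a minimal sketch of the classic, non-private Good-Turing estimator, assuming the standard adjusted-count rule r* = (r+1) * N_{r+1} / N_r (where N_r is the number of symbols seen exactly r times) with a fallback to the raw count when N_{r+1} is empty. The function name and fallback rule are illustrative choices, not the paper's private algorithm.

```python
from collections import Counter

def good_turing(samples):
    """Return Good-Turing probability estimates for observed symbols,
    plus the estimated total mass of unseen symbols (N_1 / n).

    A sketch of the classic estimator; the paper's private variants
    are not reproduced here.
    """
    n = len(samples)
    counts = Counter(samples)                   # r(x): times each symbol was seen
    count_of_counts = Counter(counts.values())  # N_r: number of symbols seen exactly r times

    probs = {}
    for symbol, r in counts.items():
        if count_of_counts.get(r + 1, 0) > 0:
            # Good-Turing adjusted count: r* = (r + 1) * N_{r+1} / N_r
            r_star = (r + 1) * count_of_counts[r + 1] / count_of_counts[r]
        else:
            r_star = r  # fall back to the raw count when N_{r+1} is empty
        probs[symbol] = r_star / n

    unseen_mass = count_of_counts.get(1, 0) / n  # Good-Turing missing-mass estimate
    return probs, unseen_mass

# Example: symbols seen once ('c', 'd') get adjusted mass; N_1/n estimates unseen mass.
probs, unseen = good_turing(list("abracadabra"))
```

Note that the raw estimates need not sum to one; practical variants renormalize or smooth the count-of-counts, and the paper's contribution is to make such estimators differentially private while preserving instance-optimality.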
@article{ye2025_2505.23620,
  title={Instance-Optimality for Private KL Distribution Estimation},
  author={Jiayuan Ye and Vitaly Feldman and Kunal Talwar},
  journal={arXiv preprint arXiv:2505.23620},
  year={2025}
}