We study the problem of online conditional distribution estimation with \emph{unbounded} label sets under local differential privacy. Let $\mathcal{F}$ be a distribution-valued function class with an unbounded label set. We aim to estimate an \emph{unknown} function $f \in \mathcal{F}$ in an online fashion: at time $t$, when the context $x_t$ is provided, we generate an estimate of $f(x_t)$ under KL-divergence, knowing only a privatized version of the true labels sampled from $f(x_t)$. The ultimate objective is to minimize the cumulative KL-risk over a finite horizon $T$. We show that under $\varepsilon$-local differential privacy of the privatized labels, the KL-risk grows as $\tilde{\Theta}\big(\tfrac{1}{\varepsilon}\sqrt{KT}\big)$ up to poly-logarithmic factors, where $K = |\mathcal{F}|$. This is in stark contrast to the bound demonstrated by Wu et al. (2023a) for bounded label sets. As a byproduct, our results recover a nearly tight upper bound for the hypothesis selection problem of Gopi et al. (2020), previously established only for the batch setting.
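For concreteness, the protocol and objective can be written as follows; the symbols $\hat{f}_t$ (the learner's estimate entering round $t$), $Q_t$ (the label randomizer), and $z_t$ (the privatized label) are illustrative notation, not fixed by the abstract. At each round, nature draws $y_t \sim f(x_t)$, the learner observes only $z_t \sim Q_t(\cdot \mid y_t)$, and the cumulative KL-risk over the horizon $T$ is
\[
  R_T \;=\; \sum_{t=1}^{T} \mathbb{E}\!\left[ D_{\mathrm{KL}}\!\big( f(x_t) \,\big\|\, \hat{f}_t(x_t) \big) \right],
\]
where each randomizer $Q_t$ satisfies $\varepsilon$-local differential privacy in the standard sense, i.e. $Q_t(z \mid y) \le e^{\varepsilon}\, Q_t(z \mid y')$ for all labels $y, y'$ and all outputs $z$.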