Online Distribution Learning with Local Private Constraints

1 February 2024
Jin Sima
Changlong Wu
O. Milenkovic
Wojtek Szpankowski
Abstract

We study the problem of online conditional distribution estimation with \emph{unbounded} label sets under local differential privacy. Let $\mathcal{F}$ be a distribution-valued function class with unbounded label set. We aim to estimate an \emph{unknown} function $f \in \mathcal{F}$ in an online fashion, so that at time $t$, when the context $\boldsymbol{x}_t$ is provided, we can generate an estimate of $f(\boldsymbol{x}_t)$ under KL-divergence, knowing only a privatized version of the true labels sampled from $f(\boldsymbol{x}_t)$. The ultimate objective is to minimize the cumulative KL-risk over a finite horizon $T$. We show that under $(\epsilon,0)$-local differential privacy of the privatized labels, the KL-risk grows as $\tilde{\Theta}(\frac{1}{\epsilon}\sqrt{KT})$ up to poly-logarithmic factors, where $K=|\mathcal{F}|$. This is in stark contrast to the $\tilde{\Theta}(\sqrt{T\log K})$ bound demonstrated by Wu et al. (2023a) for bounded label sets. As a byproduct, our results recover a nearly tight upper bound for the hypothesis selection problem of Gopi et al. (2020), previously established only for the batch setting.
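For intuition only, the following is a minimal Python sketch (not from the paper) of the two ingredients in the problem statement: an $(\epsilon,0)$-locally differentially private label channel, here instantiated as $k$-ary randomized response, and the cumulative KL-risk being minimized. A finite label alphabet of size $k$ and a single fixed context are assumed purely for illustration; the paper treats unbounded label sets and a hypothesis class $\mathcal{F}$, and the estimator below is a plug-in baseline, not the paper's algorithm. All function names are hypothetical.

```python
import numpy as np

def randomized_response(label: int, k: int, epsilon: float, rng) -> int:
    """k-ary randomized response, an (epsilon, 0)-LDP mechanism:
    report the true label w.p. e^eps / (e^eps + k - 1),
    otherwise a uniformly random *other* label."""
    p_true = np.exp(epsilon) / (np.exp(epsilon) + k - 1)
    if rng.random() < p_true:
        return label
    other = int(rng.integers(k - 1))
    return other if other < label else other + 1

def debias(q_hat: np.ndarray, epsilon: float) -> np.ndarray:
    """Invert the known randomized-response channel on the empirical
    distribution of privatized labels, then renormalize."""
    k = q_hat.size
    p_true = np.exp(epsilon) / (np.exp(epsilon) + k - 1)
    p_other = (1.0 - p_true) / (k - 1)
    p = (q_hat - p_other) / (p_true - p_other)
    p = np.clip(p, 1e-12, None)
    return p / p.sum()

def kl(p: np.ndarray, q: np.ndarray) -> float:
    """KL(p || q), with the convention 0 * log(0/q) = 0."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Toy run with a fixed context, so f(x_t) is one unknown distribution.
rng = np.random.default_rng(0)
k, eps, T = 5, 1.0, 10_000
f_xt = np.array([0.4, 0.3, 0.15, 0.1, 0.05])  # unknown truth f(x_t)

counts = np.ones(k)  # smoothed counts of privatized labels seen so far
cum_kl_risk = 0.0
for t in range(T):
    estimate = debias(counts / counts.sum(), eps)
    cum_kl_risk += kl(f_xt, estimate)      # per-round KL-risk
    y_t = int(rng.choice(k, p=f_xt))       # true label ~ f(x_t)
    counts[randomized_response(y_t, k, eps, rng)] += 1

print(f"cumulative KL-risk over T={T} rounds: {cum_kl_risk:.1f}")
```

The debiasing step hints at where the $\frac{1}{\epsilon}$ factor in the risk bound comes from: for small $\epsilon$ the channel contracts all label distributions toward uniform, so each privatized label carries only a fraction of the information in the true label and distinguishing candidate hypotheses requires correspondingly more rounds.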
