arXiv:1910.00482

Estimating Smooth GLM in Non-interactive Local Differential Privacy Model with Public Unlabeled Data

1 October 2019
Di Wang
Lijie Hu
Huanyu Zhang
Marco Gaboardi
Jinhui Xu
Abstract

In this paper, we study the problem of estimating smooth Generalized Linear Models (GLMs) in the Non-interactive Local Differential Privacy (NLDP) model. Unlike the classical setting, our model allows the server to access additional public but unlabeled data. In the first part of the paper we focus on GLMs. Specifically, we first consider the case where each data record is i.i.d. sampled from a zero-mean multivariate Gaussian distribution. Motivated by Stein's lemma, we present an $(\epsilon, \delta)$-NLDP algorithm for GLMs. Moreover, the sample complexity of public and private data for the algorithm to achieve an $\ell_2$-norm estimation error of $\alpha$ (with high probability) is $O(p\alpha^{-2})$ and $\tilde{O}(p^3\alpha^{-2}\epsilon^{-2})$ respectively, where $p$ is the dimension of the feature vector. This is a significant improvement over the previously known sample complexities for GLMs without public data, which are exponential or quasi-polynomial in $\alpha^{-1}$, or exponential in $p$. Then we consider a more general setting where each data record is i.i.d. sampled from some sub-Gaussian distribution with bounded $\ell_1$-norm. Based on a variant of Stein's lemma, we propose an $(\epsilon, \delta)$-NLDP algorithm for GLMs whose sample complexity of public and private data to achieve an $\ell_\infty$-norm estimation error of $\alpha$ is $O(p^2\alpha^{-2})$ and $\tilde{O}(p^2\alpha^{-2}\epsilon^{-2})$ respectively, under some mild assumptions and provided that $\alpha$ is not too small (i.e., $\alpha \geq \Omega(\frac{1}{\sqrt{p}})$). In the second part of the paper, we extend our idea to the problem of estimating non-linear regressions and show similar results as in GLMs for both the multivariate Gaussian and sub-Gaussian cases. Finally, we demonstrate the effectiveness of our algorithms through experiments on both synthetic and real-world datasets.
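
To make the Stein's-lemma connection concrete, below is a minimal illustrative sketch of the Gaussian-feature idea, not the paper's actual algorithm or its constants: for a GLM with zero-mean Gaussian features, Stein's lemma implies that $E[yx]$ is proportional to $\Sigma w^*$, so the server can average locally privatized reports of $y_i x_i$ to approximate $E[yx]$ and use the public unlabeled data only to estimate the covariance $\Sigma$. The function names, clipping bound, and Gaussian-mechanism noise calibration below are illustrative assumptions.

```python
# Illustrative sketch only: the clipping bound, noise calibration, and
# normalization are simplified assumptions, not the paper's constructions.
import numpy as np

def privatize_record(x, y, eps, delta, clip, rng):
    """User side (one round, non-interactive): clip the statistic y * x and
    add Gaussian noise calibrated to (eps, delta) local differential privacy."""
    z = y * x
    norm = np.linalg.norm(z)
    if norm > clip:                                   # bound the sensitivity
        z = z * (clip / norm)
    # Gaussian mechanism: L2 distance between any two clipped reports is at most 2 * clip.
    sigma = 2.0 * clip * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return z + rng.normal(0.0, sigma, size=z.shape)

def estimate_glm_direction(reports, public_unlabeled):
    """Server side: the average of the noisy reports approximates E[y x], which by
    Stein's lemma is proportional to Sigma @ w*; the public unlabeled data is used
    only to estimate Sigma. The unknown scalar factor merely rescales the estimate."""
    mean_yx = reports.mean(axis=0)
    Sigma_hat = np.cov(public_unlabeled, rowvar=False)
    return np.linalg.solve(Sigma_hat, mean_yx)

# Toy usage on synthetic logistic data (illustration only).
rng = np.random.default_rng(0)
p, n_priv, n_pub = 5, 200_000, 5_000
w_star = rng.normal(size=p)
w_star /= np.linalg.norm(w_star)
X_priv = rng.normal(size=(n_priv, p))
y_priv = (rng.random(n_priv) < 1.0 / (1.0 + np.exp(-X_priv @ w_star))).astype(float)
X_pub = rng.normal(size=(n_pub, p))                   # public, unlabeled
reports = np.array([privatize_record(x, y, eps=2.0, delta=1e-5, clip=3.0, rng=rng)
                    for x, y in zip(X_priv, y_priv)])
w_hat = estimate_glm_direction(reports, X_pub)
print("cosine similarity:", float(w_hat @ w_star) / np.linalg.norm(w_hat))
```

In this toy version the public unlabeled data serves only to estimate the feature covariance; the paper's algorithms handle general smooth links and the sub-Gaussian case, which this sketch does not.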
