Manipulation Attacks in Local Differential Privacy

arXiv:1909.09630
20 September 2019
Albert Cheu, Adam D. Smith, Jonathan R. Ullman
Abstract

Local differential privacy is a widely studied restriction on distributed algorithms that collect aggregates about sensitive user data, and is now deployed in several large systems. We initiate a systematic study of a fundamental limitation of locally differentially private protocols: they are highly vulnerable to adversarial manipulation. While any algorithm can be manipulated by adversaries who lie about their inputs, we show that any non-interactive locally differentially private protocol can be manipulated to a much greater extent. Namely, when the privacy level is high or the input domain is large, an attacker who controls a small fraction of the users in the protocol can completely obscure the distribution of the users' inputs. We also show that existing protocols differ greatly in their resistance to manipulation, even when they offer the same accuracy guarantee with honest execution. Our results suggest caution when deploying local differential privacy and reinforce the importance of efficient cryptographic techniques for emulating mechanisms from central differential privacy in distributed settings.
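To make the vulnerability concrete, below is a minimal Python sketch (not taken from the paper) of the simplest local-DP primitive, binary randomized response, together with a naive manipulation attack in which m corrupted users skip the randomization step and always report 1. The key point it illustrates: the debiasing step that makes the honest estimator unbiased also amplifies each attacker's influence by a factor of (e^eps + 1)/(e^eps - 1), roughly 2/eps for small eps, which mirrors the abstract's claim that manipulation gets worse as the privacy level increases. All parameter values are illustrative assumptions.

```python
import numpy as np

def randomized_response(bit: int, eps: float, rng) -> int:
    """eps-LDP randomized response: report the true bit w.p. e^eps / (e^eps + 1)."""
    p_truth = np.exp(eps) / (np.exp(eps) + 1.0)
    return bit if rng.random() < p_truth else 1 - bit

def estimate_mean(reports: np.ndarray, eps: float) -> float:
    """Debias the raw report mean to estimate the fraction of 1s among users."""
    p = np.exp(eps) / (np.exp(eps) + 1.0)
    return (reports.mean() - (1.0 - p)) / (2.0 * p - 1.0)

rng = np.random.default_rng(0)
n, m, eps = 100_000, 1_000, 0.5              # m attackers among n users (hypothetical)
true_bits = rng.integers(0, 2, size=n - m)   # honest inputs, true mean about 0.5

honest_reports = [randomized_response(b, eps, rng) for b in true_bits]
attack_reports = [1] * m                     # attackers ignore the protocol and send 1

est_honest = estimate_mean(np.array(honest_reports), eps)
est_attacked = estimate_mean(np.array(honest_reports + attack_reports), eps)
print(f"honest estimate:   {est_honest:.4f}")
print(f"attacked estimate: {est_attacked:.4f}")
# Each attacker shifts the raw report mean by at most 1/n, but debiasing divides
# by (2p - 1) = (e^eps - 1)/(e^eps + 1), so the m/n corruption rate is amplified
# by (e^eps + 1)/(e^eps - 1) ~ 2/eps in the final estimate.
```

This toy attack only biases one statistic; the paper's stronger claim is that against any non-interactive LDP protocol, a small coalition can distort the entire estimated input distribution, not merely a single mean.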
