
Privacy Auditing of Large Language Models

Abstract

Current techniques for privacy auditing of large language models (LLMs) have limited efficacy -- they rely on basic approaches to generate canaries, which leads to weak membership inference attacks that in turn give loose lower bounds on the empirical privacy leakage. We develop canaries that are far more effective than those used in prior work under threat models that cover a range of realistic settings. We demonstrate through extensive experiments on multiple families of fine-tuned LLMs that our approach sets a new standard for detection of privacy leakage. For measuring the memorization rate of non-privately trained LLMs, our designed canaries surpass prior approaches. For example, on the Qwen2.5-0.5B model, our designed canaries achieve 49.6% TPR at 1% FPR, vastly surpassing the prior approach's 4.2% TPR at 1% FPR. Our method can be used to provide a privacy audit of ε ≈ 1 for a model trained with theoretical ε of 4. To the best of our knowledge, this is the first time that a privacy audit of LLM training has achieved nontrivial auditing success in the setting where the attacker cannot train shadow models, insert gradient canaries, or access the model at every iteration.
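
To illustrate the auditing metrics mentioned in the abstract, the sketch below (not from the paper) shows one way to compute TPR at a fixed FPR from membership-inference scores on inserted versus held-out canaries, and the standard point-estimate ε lower bound implied by the (ε, δ)-DP constraint TPR ≤ e^ε · FPR + δ. The function names, score distributions, and numbers are hypothetical; a real audit would use confidence intervals (e.g., Clopper-Pearson) on the observed TPR and FPR rather than point estimates.

# Minimal sketch, assuming higher score = more likely to be a training member.
import numpy as np

def tpr_at_fpr(member_scores, nonmember_scores, target_fpr=0.01):
    """True-positive rate at the threshold that yields target_fpr on non-members."""
    # Pick the threshold so that only target_fpr of non-member canaries exceed it.
    threshold = np.quantile(nonmember_scores, 1.0 - target_fpr)
    return float(np.mean(np.asarray(member_scores) > threshold))

def naive_epsilon_lower_bound(tpr, fpr, delta=0.0):
    """Point estimate from TPR <= exp(eps) * FPR + delta, i.e. eps >= log((TPR - delta) / FPR)."""
    if tpr <= delta or fpr <= 0:
        return 0.0
    return float(np.log((tpr - delta) / fpr))

# Illustrative synthetic scores only (hypothetical, not the paper's canaries).
rng = np.random.default_rng(0)
members = rng.normal(1.0, 1.0, size=1000)      # scores for canaries in the training set
nonmembers = rng.normal(0.0, 1.0, size=1000)   # scores for held-out canaries
tpr = tpr_at_fpr(members, nonmembers, target_fpr=0.01)
print(f"TPR at 1% FPR: {tpr:.3f}, naive eps lower bound: {naive_epsilon_lower_bound(tpr, 0.01):.2f}")

Under this framing, a stronger canary design raises the achievable TPR at a fixed low FPR, which directly tightens the empirical ε lower bound reported by the audit.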

@article{panda2025_2503.06808,
  title={Privacy Auditing of Large Language Models},
  author={Ashwinee Panda and Xinyu Tang and Milad Nasr and Christopher A. Choquette-Choo and Prateek Mittal},
  journal={arXiv preprint arXiv:2503.06808},
  year={2025}
}