
On the Price of Privacy for Language Identification and Generation

Xiaoyu Li
Andi Han
Jiaojiao Jiang
Junbin Gao
Main: 12 pages
Bibliography: 5 pages
1 table
Appendix: 31 pages
Abstract

As large language models (LLMs) are increasingly trained on sensitive user data, understanding the fundamental cost of privacy in language learning becomes essential. We initiate the study of differentially private (DP) language identification and generation in the agnostic statistical setting, establishing algorithms and matching lower bounds that precisely quantify the cost of privacy. For both tasks, approximate $(\varepsilon, \delta)$-DP with constant $\varepsilon > 0$ recovers the non-private error rates: $\exp(-r(n))$ for identification (for any $r(n) = o(n)$) and $\exp(-\Omega(n))$ for generation. Under pure $\varepsilon$-DP, the exponents degrade by a multiplicative factor of $\min\{1, \varepsilon\}$, which we show is tight up to constants. Notably, for generation under pure DP with mild assumptions, the upper bound $\exp(-\min\{1,\varepsilon\} \cdot \Omega(n))$ matches the lower bound up to constants, establishing an optimal rate. Our results show that the cost of privacy in language learning is surprisingly mild: absent entirely under approximate DP, and exactly a $\min\{1,\varepsilon\}$ factor in the exponent under pure DP.
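To make the claimed rates concrete, here is a minimal numeric sketch of the $\min\{1,\varepsilon\}$ degradation in the error exponent under pure DP, as stated in the abstract. The choice $r(n) = \sqrt{n}$ is a hypothetical example of a sublinear rate $r(n) = o(n)$; the hidden constants in $\Omega(\cdot)$ are set to 1 purely for illustration.

```python
import math

def nonprivate_error(n: int, r) -> float:
    # Non-private identification error rate: exp(-r(n)) for any r(n) = o(n)
    return math.exp(-r(n))

def pure_dp_error(n: int, r, eps: float) -> float:
    # Under pure eps-DP, the exponent shrinks by a factor of min(1, eps)
    # (illustrative: hidden constants taken to be 1)
    return math.exp(-min(1.0, eps) * r(n))

# Hypothetical sublinear rate r(n) = sqrt(n), so r(n) = o(n)
r = lambda n: math.sqrt(n)
n = 10_000

# Non-private exponent: r(n) = 100; pure DP with eps = 0.1: exponent 10
print(nonprivate_error(n, r))        # exp(-100)
print(pure_dp_error(n, r, eps=0.1))  # exp(-10)

# With constant eps >= 1, min(1, eps) = 1 and the non-private rate is recovered
print(pure_dp_error(n, r, eps=2.0))  # exp(-100), same as non-private
```

The sketch shows the abstract's dichotomy: for $\varepsilon \geq 1$ the exponent is unchanged, while for small $\varepsilon$ it is scaled down by exactly $\varepsilon$.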
