KL-regularization Itself is Differentially Private in Bandits and RLHF

Differential Privacy (DP) provides a rigorous framework for privacy, ensuring that the outputs of data-driven algorithms remain statistically indistinguishable across datasets that differ in a single entry. While guaranteeing DP generally requires explicitly injecting noise either into the algorithm itself or into its outputs, the intrinsic randomness of existing algorithms presents an opportunity to achieve DP "for free". In this work, we explore the role of regularization in achieving DP across three decision-making problems, all in the offline data setting: multi-armed bandits, linear contextual bandits, and reinforcement learning from human feedback (RLHF). We show that adding KL-regularization to the learning objective, a common technique in optimization algorithms, makes the action sampled from the resulting stochastic policy itself differentially private. This offers a new route to privacy guarantees without additional noise injection, while preserving the performance benefits that regularization already provides.
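As an informal illustration (a sketch of the standard setup, not a statement of the paper's formal results), consider a bandit-style KL-regularized objective with an estimated reward $\hat r$, a reference policy $\pi_{\mathrm{ref}}$, and regularization strength $\beta > 0$. Its well-known closed-form maximizer is a Gibbs-style stochastic policy:
\[
\pi^\star \;=\; \arg\max_{\pi} \; \mathbb{E}_{a \sim \pi}\big[\hat r(a)\big] \;-\; \beta\, \mathrm{KL}\big(\pi \,\|\, \pi_{\mathrm{ref}}\big),
\qquad
\pi^\star(a) \;\propto\; \pi_{\mathrm{ref}}(a)\, \exp\!\big(\hat r(a)/\beta\big).
\]
Sampling a single action from this exponential-weight policy resembles the exponential mechanism from the DP literature, which gives intuition for why the sampled action can satisfy a privacy guarantee without extra noise; the precise guarantee would depend on how sensitive $\hat r$ is to changing one entry of the offline dataset and on the choice of $\beta$.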
@article{zhang2025_2505.18407,
  title   = {KL-regularization Itself is Differentially Private in Bandits and RLHF},
  author  = {Yizhou Zhang and Kishan Panaganti and Laixi Shi and Juba Ziani and Adam Wierman},
  journal = {arXiv preprint arXiv:2505.18407},
  year    = {2025}
}