Local Differential Privacy for Regret Minimization in Reinforcement Learning

15 October 2020
Evrard Garcelon, Vianney Perchet, Ciara Pike-Burke, Matteo Pirotta
arXiv:2010.07778 · PDF · HTML
Abstract

Reinforcement learning algorithms are widely used in domains where it is desirable to provide a personalized service. In these domains, it is common for user data to contain sensitive information that needs to be protected from third parties. Motivated by this, we study privacy in the context of finite-horizon Markov Decision Processes (MDPs) by requiring information to be obfuscated on the user side. We formulate this notion of privacy for RL by leveraging the local differential privacy (LDP) framework. We establish a lower bound for regret minimization in finite-horizon MDPs with LDP guarantees, which shows that guaranteeing privacy has a multiplicative effect on the regret. This result shows that while LDP is an appealing notion of privacy, it makes the learning problem significantly more complex. Finally, we present an optimistic algorithm that simultaneously satisfies $\varepsilon$-LDP requirements and achieves $\sqrt{K}/\varepsilon$ regret in any finite-horizon MDP after $K$ episodes, matching the lower bound dependency on the number of episodes $K$.

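To make the user-side obfuscation concrete, here is a minimal Python/NumPy sketch of privatizing one episode's sufficient statistics with a Laplace mechanism before they leave the user's device. The function name `privatize_episode`, the $2H$ sensitivity bound per statistic, and the even split of the privacy budget are illustrative assumptions; the abstract does not specify the paper's exact mechanism.

```python
import numpy as np

def privatize_episode(visit_counts, reward_sums, epsilon, horizon, rng=None):
    """User-side LDP obfuscation of one episode's statistics (illustrative sketch).

    visit_counts : per state-action visit counts from this episode
    reward_sums  : per state-action accumulated rewards (rewards assumed in [0, 1])
    epsilon      : the LDP privacy parameter
    horizon      : episode length H

    A trajectory of length H contributes at most H to each statistic, so
    swapping one user's trajectory for another changes each statistic by at
    most 2H in L1 norm. Adding Laplace noise of scale 2H / (epsilon / 2) to
    each of the two statistics gives epsilon-LDP for the pair by basic
    composition. Only the noisy statistics are ever released to the learner.
    """
    rng = np.random.default_rng() if rng is None else rng
    scale = 4.0 * horizon / epsilon  # sensitivity 2H, half the budget per statistic
    noisy_counts = visit_counts + rng.laplace(0.0, scale, size=visit_counts.shape)
    noisy_rewards = reward_sums + rng.laplace(0.0, scale, size=reward_sums.shape)
    return noisy_counts, noisy_rewards

# Toy usage: 20 state-action pairs, horizon H = 5, one episode spent
# entirely on pair 3 with total reward 2.5.
H, SA = 5, 20
counts, rewards = np.zeros(SA), np.zeros(SA)
counts[3], rewards[3] = float(H), 2.5
noisy_counts, noisy_rewards = privatize_episode(counts, rewards, epsilon=1.0, horizon=H)
```

Under this kind of mechanism, the learner aggregates the noisy statistics over $K$ episodes; since the independent Laplace noise concentrates at rate $O(1/\sqrt{K})$ after averaging, the privacy cost enters the confidence widths as a $1/\varepsilon$ factor, consistent with the $\sqrt{K}/\varepsilon$ regret bound stated in the abstract.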
View on arXiv