Differentially Private Exploration in Reinforcement Learning with Linear Representation

2 December 2021
Paul Luyo
Evrard Garcelon
A. Lazaric
Matteo Pirotta
Abstract

This paper studies privacy-preserving exploration in Markov Decision Processes (MDPs) with linear representation. We first consider the setting of linear-mixture MDPs (Ayoub et al., 2020) (a.k.a. model-based setting) and provide a unified framework for analyzing joint and local differentially private (DP) exploration. Through this framework, we prove a $\widetilde{O}(K^{3/4}/\sqrt{\epsilon})$ regret bound for $(\epsilon,\delta)$-local DP exploration and a $\widetilde{O}(\sqrt{K/\epsilon})$ regret bound for $(\epsilon,\delta)$-joint DP. We further study privacy-preserving exploration in linear MDPs (Jin et al., 2020) (a.k.a. model-free setting), where we provide a $\widetilde{O}\left(K^{3/5}/\epsilon^{2/5}\right)$ regret bound for $(\epsilon,\delta)$-joint DP, with a novel algorithm based on low-switching. Finally, we provide insights into the issues of designing local DP algorithms in this model-free setting.
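
For readers unfamiliar with how joint DP is commonly enforced in linear-representation RL, the sketch below shows one standard construction, not necessarily the exact mechanism of this paper: the Gram matrix and regression targets of a least-squares estimate are perturbed with Gaussian noise before the agent updates its policy. All function and variable names are illustrative assumptions, and the noise scale `sigma` is assumed to be calibrated separately to the desired $(\epsilon,\delta)$ budget.

```python
import numpy as np

def privatize_lsq_statistics(features, targets, sigma, reg=1.0, rng=None):
    """Gaussian-mechanism sketch for privatized least squares (illustrative only).

    features: (n, d) array of state-action features observed so far
    targets:  (n,)  array of regression targets (e.g., rewards plus bootstrapped values)
    sigma:    noise scale, assumed calibrated offline to the (epsilon, delta) budget
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = features.shape

    # Non-private sufficient statistics of ridge regression.
    gram = features.T @ features + reg * np.eye(d)   # (d, d)
    moment = features.T @ targets                    # (d,)

    # Symmetrized Gaussian noise keeps the perturbed Gram matrix symmetric;
    # the extra diagonal shift keeps it positive definite with high probability.
    noise = rng.normal(scale=sigma, size=(d, d))
    gram_priv = gram + (noise + noise.T) / np.sqrt(2) \
        + 2.0 * sigma * np.sqrt(d) * np.eye(d)

    # Independent Gaussian noise on the feature-target moment.
    moment_priv = moment + rng.normal(scale=sigma, size=d)

    # Privatized least-squares parameter estimate.
    theta_priv = np.linalg.solve(gram_priv, moment_priv)
    return theta_priv, gram_priv
```

Under a low-switching schedule of the kind the abstract alludes to, such a privatized estimate would only be recomputed at a small number of episodes, which is one way the amount of injected noise, and hence the regret cost of privacy, can be kept under control.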
