arXiv:2003.12613

Adaptive Reward-Poisoning Attacks against Reinforcement Learning

27 March 2020
Xuezhou Zhang
Yuzhe Ma
Adish Singla
Xiaojin Zhu
Abstract

In reward-poisoning attacks against reinforcement learning (RL), an attacker can perturb the environment reward $r_t$ into $r_t+\delta_t$ at each step, with the goal of forcing the RL agent to learn a nefarious policy. We categorize such attacks by the infinity-norm constraint on $\delta_t$: We provide a lower threshold below which reward-poisoning attack is infeasible and RL is certified to be safe; we provide a corresponding upper threshold above which the attack is feasible. Feasible attacks can be further categorized as non-adaptive where $\delta_t$ depends only on $(s_t, a_t, s_{t+1})$, or adaptive where $\delta_t$ depends further on the RL agent's learning process at time $t$. Non-adaptive attacks have been the focus of prior works. However, we show that under mild conditions, adaptive attacks can achieve the nefarious policy in steps polynomial in state-space size $|S|$, whereas non-adaptive attacks require exponential steps. We provide a constructive proof that a Fast Adaptive Attack strategy achieves the polynomial rate. Finally, we show that empirically an attacker can find effective reward-poisoning attacks using state-of-the-art deep RL techniques.
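
To make the setup concrete, the sketch below instantiates the perturbation $r_t \mapsto r_t + \delta_t$, with $|\delta_t| \le \delta_{\max}$, inside a tabular Q-learning loop. It is an illustration under assumed details only: the toy chain MDP, the budget `delta_max`, the target policy, and the helper `adaptive_delta` are all hypothetical, and the adaptive rule shown is a simple stand-in rather than the paper's Fast Adaptive Attack. Its purpose is just to show where an adaptive attack differs from a non-adaptive one: it reads the agent's current Q-table before choosing $\delta_t$.

```python
import numpy as np

# Minimal sketch of the reward-poisoning setting described above, assuming a
# toy 5-state chain MDP and a tabular Q-learning victim. The environment, the
# budget delta_max, and the helper adaptive_delta are illustrative assumptions;
# this is not the paper's Fast Adaptive Attack.

n_states, n_actions = 5, 2
target_action = 0        # the nefarious policy the attacker wants: always action 0
delta_max = 0.5          # infinity-norm budget on the per-step perturbation delta_t

alpha, gamma, eps = 0.1, 0.9, 0.1
rng = np.random.default_rng(0)
Q = np.zeros((n_states, n_actions))

def step(s, a):
    # Toy dynamics: action 1 moves right and pays 1 at the last state; action 0 stays.
    s_next = min(s + 1, n_states - 1) if a == 1 else s
    r = 1.0 if (a == 1 and s_next == n_states - 1) else 0.0
    return s_next, r

def adaptive_delta(s, a, Q):
    # Adaptive attack: consult the agent's current Q-table. Spend no budget in
    # states where the agent already prefers target_action; otherwise make the
    # target action look better within the L-infinity budget. A non-adaptive
    # attack would depend only on (s_t, a_t, s_{t+1}), not on Q.
    if Q[s].argmax() == target_action:
        return 0.0
    return delta_max if a == target_action else -delta_max

s = 0
for t in range(5000):
    a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
    s_next, r = step(s, a)
    r_poisoned = r + adaptive_delta(s, a, Q)        # the agent observes r_t + delta_t
    Q[s, a] += alpha * (r_poisoned + gamma * Q[s_next].max() - Q[s, a])
    s = 0 if s_next == n_states - 1 else s_next     # restart the episode at the terminal state

# Greedy policy learned under poisoning; with a large enough budget it tends
# toward target_action in every state.
print("learned greedy policy:", Q.argmax(axis=1))
```

In this toy setting, the greedy policy typically collapses to the target action once the budget is large relative to the environment's true rewards; shrinking `delta_max` well below the reward gaps gives a rough feel for the infeasibility threshold the abstract describes.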
