
Understanding the Limits of Poisoning Attacks in Episodic Reinforcement Learning

29 August 2022
A. Rangi, Haifeng Xu, Long Tran-Thanh, M. Franceschetti
AAML, OffRL

Papers citing "Understanding the Limits of Poisoning Attacks in Episodic Reinforcement Learning"

5 papers shown

SleeperNets: Universal Backdoor Poisoning Attacks Against Reinforcement Learning Agents
Ethan Rathbun, Christopher Amato, Alina Oprea
OffRL, AAML
30 May 2024

Nearly Minimax Optimal Regret for Learning Linear Mixture Stochastic Shortest Path
Qiwei Di, Jiafan He, Dongruo Zhou, Quanquan Gu
14 Feb 2024

Implicit Poisoning Attacks in Two-Agent Reinforcement Learning: Adversarial Policies for Training-Time Attacks
Mohammad Mohammadi, Jonathan Nöther, Debmalya Mandal, Adish Singla, Goran Radanović
AAML, OffRL
27 Feb 2023

Reward Poisoning in Reinforcement Learning: Attacks Against Unknown Learners in Unknown Environments
Amin Rakhsha, Xuezhou Zhang, Xiaojin Zhu, Adish Singla
AAML, OffRL
16 Feb 2021

Defense Against Reward Poisoning Attacks in Reinforcement Learning
Kiarash Banihashem, Adish Singla, Goran Radanović
AAML
10 Feb 2021