Poisoning Deep Reinforcement Learning Agents with In-Distribution Triggers
14 June 2021
C. Ashcraft, Kiran Karra
ArXiv · PDF · HTML

Papers citing "Poisoning Deep Reinforcement Learning Agents with In-Distribution Triggers" (14 papers shown)

Investigating the Treacherous Turn in Deep Reinforcement Learning
C. Ashcraft, Kiran Karra, Josh Carney, Nathan G. Drenkow
11 Apr 2025

Online Poisoning Attack Against Reinforcement Learning under Black-box Environments
Jianhui Li, Bokang Zhang, Junfeng Wu
Communities: AAML, OffRL, OnRL
01 Dec 2024

A Spatiotemporal Stealthy Backdoor Attack against Cooperative Multi-Agent Deep Reinforcement Learning
Yinbo Yu, Saihao Yan, Jiajia Liu
Communities: AAML
12 Sep 2024

Mitigating Deep Reinforcement Learning Backdoors in the Neural Activation Space
Sanyam Vyas, Chris Hicks, V. Mavroudis
Communities: AAML
21 Jul 2024

The last Dance : Robust backdoor attack via diffusion models and bayesian approach
Orson Mengara
Communities: DiffM
05 Feb 2024

Poisoning Web-Scale Training Datasets is Practical
Nicholas Carlini, Matthew Jagielski, Christopher A. Choquette-Choo, Daniel Paleka, Will Pearce, Hyrum S. Anderson, Andreas Terzis, Kurt Thomas, Florian Tramèr
Communities: SILM
20 Feb 2023

A Survey on Reinforcement Learning Security with Application to Autonomous Driving
Ambra Demontis, Maura Pintor, Luca Demetrio, Kathrin Grosse, Hsiao-Ying Lin, Chengfang Fang, Battista Biggio, Fabio Roli
Communities: AAML
12 Dec 2022

Don't Watch Me: A Spatio-Temporal Trojan Attack on Deep-Reinforcement-Learning-Augment Autonomous Driving
Yinbo Yu, Jiajia Liu
22 Nov 2022

Backdoor Attacks on Multiagent Collaborative Systems
Shuo Chen, Yue Qiu, Jie Zhang
Communities: AAML
21 Nov 2022

Adversarial Cheap Talk
Chris Xiaoxuan Lu, Timon Willi, Alistair Letcher, Jakob N. Foerster
Communities: AAML
20 Nov 2022

BAFFLE: Hiding Backdoors in Offline Reinforcement Learning Datasets
Chen Gong, Zhou Yang, Yunru Bai, Junda He, Jieke Shi, ..., Arunesh Sinha, Bowen Xu, Xinwen Hou, David Lo, Guoliang Fan
Communities: AAML, OffRL
07 Oct 2022

A Temporal-Pattern Backdoor Attack to Deep Reinforcement Learning
Yinbo Yu, Jiajia Liu, Shouqing Li, Ke Huang, Xudong Feng
Communities: AAML
05 May 2022

Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning
Antonio Emanuele Cinà, Kathrin Grosse, Ambra Demontis, Sebastiano Vascon, Werner Zellinger, Bernhard A. Moser, Alina Oprea, Battista Biggio, Marcello Pelillo, Fabio Roli
Communities: AAML
04 May 2022

Backdoor Learning: A Survey
Yiming Li, Yong Jiang, Zhifeng Li, Shutao Xia
Communities: AAML
17 Jul 2020