Design of intentional backdoors in sequential models (arXiv:1902.09972)

26 February 2019
Authors: Zhaoyuan Yang, N. Iyer, Johan Reimann, Nurali Virani
Communities: SILM, AAML

Papers citing "Design of intentional backdoors in sequential models"

9 of 9 citing papers shown.

UNIDOOR: A Universal Framework for Action-Level Backdoor Attacks in Deep Reinforcement Learning (28 Jan 2025)
Authors: Oubo Ma, L. Du, Yang Dai, Chunyi Zhou, Qingming Li, Yuwen Pu, Shouling Ji

SleeperNets: Universal Backdoor Poisoning Attacks Against Reinforcement Learning Agents (30 May 2024)
Authors: Ethan Rathbun, Christopher Amato, Alina Oprea
Communities: OffRL, AAML

Implicit Poisoning Attacks in Two-Agent Reinforcement Learning: Adversarial Policies for Training-Time Attacks (27 Feb 2023)
Authors: Mohammad Mohammadi, Jonathan Nöther, Debmalya Mandal, Adish Singla, Goran Radanović
Communities: AAML, OffRL

Don't Watch Me: A Spatio-Temporal Trojan Attack on Deep-Reinforcement-Learning-Augment Autonomous Driving (22 Nov 2022)
Authors: Yinbo Yu, Jiajia Liu

BAFFLE: Hiding Backdoors in Offline Reinforcement Learning Datasets (07 Oct 2022)
Authors: Chen Gong, Zhou Yang, Yunru Bai, Junda He, Jieke Shi, ..., Arunesh Sinha, Bowen Xu, Xinwen Hou, David Lo, Guoliang Fan
Communities: AAML, OffRL

A Temporal-Pattern Backdoor Attack to Deep Reinforcement Learning (05 May 2022)
Authors: Yinbo Yu, Jiajia Liu, Shouqing Li, Ke Huang, Xudong Feng
Communities: AAML

Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses (18 Dec 2020)
Authors: Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, D. Song, A. Madry, Bo Li, Tom Goldstein
Communities: SILM

Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review (21 Jul 2020)
Authors: Yansong Gao, Bao Gia Doan, Zhi-Li Zhang, Siqi Ma, Jiliang Zhang, Anmin Fu, Surya Nepal, Hyoungshick Kim
Communities: AAML

A backdoor attack against LSTM-based text classification systems (29 May 2019)
Authors: Jiazhu Dai, Chuanshuai Chen
Communities: SILM