
BadRL: Sparse Targeted Backdoor Attack Against Reinforcement Learning

19 December 2023
Jing Cui, Yufei Han, Yuzhe Ma, Jianbin Jiao, Junge Zhang
    AAML

Papers citing "BadRL: Sparse Targeted Backdoor Attack Against Reinforcement Learning"

7 papers shown
BadMoE: Backdooring Mixture-of-Experts LLMs via Optimizing Routing Triggers and Infecting Dormant Experts
Qingyue Wang, Qi Pang, Xixun Lin, Shuai Wang, Daoyuan Wu
MoE
24 Apr 2025
UNIDOOR: A Universal Framework for Action-Level Backdoor Attacks in Deep Reinforcement Learning
Oubo Ma, L. Du, Yang Dai, Chunyi Zhou, Qingming Li, Yuwen Pu, Shouling Ji
28 Jan 2025
Recent Advances in Attack and Defense Approaches of Large Language Models
Jing Cui, Yishi Xu, Zhewei Huang, Shuchang Zhou, Jianbin Jiao, Junge Zhang
PILM, AAML
05 Sep 2024
SleeperNets: Universal Backdoor Poisoning Attacks Against Reinforcement Learning Agents
Ethan Rathbun, Christopher Amato, Alina Oprea
OffRL, AAML
30 May 2024
Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents
Wenkai Yang, Xiaohan Bi, Yankai Lin, Sishuo Chen, Jie Zhou, Xu Sun
LLMAG, AAML
17 Feb 2024
BACKDOORL: Backdoor Attack against Competitive Reinforcement Learning
Lun Wang, Zaynah Javed, Xian Wu, Wenbo Guo, Masashi Sugiyama, D. Song
AAML
02 May 2021
Reward Poisoning in Reinforcement Learning: Attacks Against Unknown Learners in Unknown Environments
Amin Rakhsha, Xuezhou Zhang, Xiaojin Zhu, Adish Singla
AAML, OffRL
16 Feb 2021