Adaptive Reward-Poisoning Attacks against Reinforcement Learning
arXiv:2003.12613
27 March 2020
Xuezhou Zhang, Yuzhe Ma, Adish Singla, Xiaojin Zhu
AAML

Papers citing "Adaptive Reward-Poisoning Attacks against Reinforcement Learning"

34 papers shown

Reinforcement Teaching
Alex Lewandowski, Calarina Muslimani, Dale Schuurmans, Matthew E. Taylor, Jun Luo
28 Jan 2025

Position: A taxonomy for reporting and describing AI security incidents
L. Bieringer, Kevin Paeth, Andreas Wespi, Kathrin Grosse, Alexandre Alahi
19 Dec 2024

Stealthy Adversarial Attacks on Stochastic Multi-Armed Bandits
Zhiwei Wang, Huazheng Wang, Hongning Wang
AAML
21 Feb 2024

PGN: A perturbation generation network against deep reinforcement learning
Xiangjuan Li, Feifan Li, Yang Li, Quanbiao Pan
AAML
20 Dec 2023

RLHFPoison: Reward Poisoning Attack for Reinforcement Learning with Human Feedback in Large Language Models
Jiong Wang, Junlin Wu, Muhao Chen, Yevgeniy Vorobeychik, Chaowei Xiao
AAML
16 Nov 2023

BRNES: Enabling Security and Privacy-aware Experience Sharing in Multiagent Robotic and Autonomous Systems
Md Tamjid Hossain, Hung M. La, S. Badsha, Anton Netchaev
02 Aug 2023

A Reminder of its Brittleness: Language Reward Shaping May Hinder Learning for Instruction Following Agents
Sukai Huang, N. Lipovetzky, Trevor Cohn
26 May 2023

Black-Box Targeted Reward Poisoning Attack Against Online Deep Reinforcement Learning
Yinglun Xu, Gagandeep Singh
OffRL, AAML
18 May 2023

Policy Resilience to Environment Poisoning Attacks on Reinforcement Learning
Hang Xu, Xinghua Qu, Zinovi Rabinovich
24 Apr 2023

Implicit Poisoning Attacks in Two-Agent Reinforcement Learning: Adversarial Policies for Training-Time Attacks
Mohammad Mohammadi, Jonathan Nöther, Debmalya Mandal, Adish Singla, Goran Radanović
AAML, OffRL
27 Feb 2023

New Challenges in Reinforcement Learning: A Survey of Security and Privacy
Yunjiao Lei, Dayong Ye, Sheng Shen, Yulei Sui, Tianqing Zhu, Wanlei Zhou
31 Dec 2022

A Survey on Reinforcement Learning Security with Application to Autonomous Driving
Ambra Demontis, Maura Pintor, Luca Demetrio, Kathrin Grosse, Hsiao-Ying Lin, Chengfang Fang, Battista Biggio, Fabio Roli
AAML
12 Dec 2022

Efficient Adversarial Training without Attacking: Worst-Case-Aware Robust Reinforcement Learning
Yongyuan Liang, Yanchao Sun, Ruijie Zheng, Furong Huang
OOD, AAML, OffRL
12 Oct 2022

Trustworthy Reinforcement Learning Against Intrinsic Vulnerabilities: Robustness, Safety, and Generalizability
Mengdi Xu, Zuxin Liu, Peide Huang, Wenhao Ding, Zhepeng Cen, Bo-wen Li, Ding Zhao
16 Sep 2022

Reward Delay Attacks on Deep Reinforcement Learning
Anindya Sarkar, Jiarui Feng, Yevgeniy Vorobeychik, Christopher Gill, Ning Zhang
AAML
08 Sep 2022

Sampling Attacks on Meta Reinforcement Learning: A Minimax Formulation and Complexity Analysis
Tao Li, Haozhe Lei, Quanyan Zhu
AAML
29 Jul 2022

A Search-Based Testing Approach for Deep Reinforcement Learning Agents
Amirhossein Zolfagharian, Manel Abdellatif, Lionel C. Briand, M. Bagherzadeh, Ramesh S
15 Jun 2022

Byzantine-Robust Online and Offline Distributed Reinforcement Learning
Yiding Chen, Xuezhou Zhang, Kaipeng Zhang, Mengdi Wang, Xiaojin Zhu
OffRL
01 Jun 2022

Efficient Reward Poisoning Attacks on Online Deep Reinforcement Learning
Yinglun Xu, Qi Zeng, Gagandeep Singh
AAML
30 May 2022

COPA: Certifying Robust Policies for Offline Reinforcement Learning against Poisoning Attacks
Fan Wu, Linyi Li, Chejian Xu, Huan Zhang, B. Kailkhura, K. Kenthapadi, Ding Zhao, Bo-wen Li
AAML, OffRL
16 Mar 2022

Reinforcement Learning for Linear Quadratic Control is Vulnerable Under Cost Manipulation
Yunhan Huang, Quanyan Zhu
OffRL, AAML
11 Mar 2022

Efficient Action Poisoning Attacks on Linear Contextual Bandits
Guanlin Liu, Lifeng Lai
AAML
10 Dec 2021

Reward-Free Attacks in Multi-Agent Reinforcement Learning
Ted Fujimoto, T. Doster, A. Attarian, Jill M. Brandenberger, Nathan Oken Hodas
AAML
02 Dec 2021

Adversarial Attacks in Cooperative AI
Ted Fujimoto, Arthur Paul Pedersen
AAML
29 Nov 2021

Iterative Teaching by Label Synthesis
Weiyang Liu, Zhen Liu, Hanchen Wang, Liam Paull, Bernhard Schölkopf, Adrian Weller
27 Oct 2021

When Are Linear Stochastic Bandits Attackable?
Huazheng Wang, Haifeng Xu, Hongning Wang
AAML
18 Oct 2021

Game Redesign in No-regret Game Playing
Yuzhe Ma, Young Wu, Xiaojin Zhu
18 Oct 2021

Provably Efficient Black-Box Action Poisoning Attacks Against Reinforcement Learning
Guanlin Liu, Lifeng Lai
AAML
09 Oct 2021

Advances in adversarial attacks and defenses in computer vision: A survey
Naveed Akhtar, Ajmal Saeed Mian, Navid Kardan, M. Shah
AAML
01 Aug 2021

Reinforcement Learning for Feedback-Enabled Cyber Resilience
Yunhan Huang, Linan Huang, Quanyan Zhu
02 Jul 2021

Reward Poisoning in Reinforcement Learning: Attacks Against Unknown Learners in Unknown Environments
Amin Rakhsha, Xuezhou Zhang, Xiaojin Zhu, Adish Singla
AAML, OffRL
16 Feb 2021

Defense Against Reward Poisoning Attacks in Reinforcement Learning
Kiarash Banihashem, Adish Singla, Goran Radanović
AAML
10 Feb 2021

Policy Teaching in Reinforcement Learning via Environment Poisoning Attacks
Amin Rakhsha, Goran Radanović, R. Devidze, Xiaojin Zhu, Adish Singla
AAML, OffRL
21 Nov 2020

Deep Reinforcement Learning for Dialogue Generation
Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, Dan Jurafsky
05 Jun 2016