ResearchTrend.AI

Implicit Poisoning Attacks in Two-Agent Reinforcement Learning: Adversarial Policies for Training-Time Attacks

27 February 2023
Mohammad Mohammadi, Jonathan Nöther, Debmalya Mandal, Adish Singla, Goran Radanović
Tags: AAML, OffRL

Papers citing "Implicit Poisoning Attacks in Two-Agent Reinforcement Learning: Adversarial Policies for Training-Time Attacks"

8 / 8 papers shown
UNIDOOR: A Universal Framework for Action-Level Backdoor Attacks in Deep Reinforcement Learning
Oubo Ma, L. Du, Yang Dai, Chunyi Zhou, Qingming Li, Yuwen Pu, Shouling Ji
28 Jan 2025
Corruption-Robust Offline Two-Player Zero-Sum Markov Games
Andi Nika, Debmalya Mandal, Adish Singla, Goran Radanović
Tags: OffRL
04 Mar 2024
Performative Reinforcement Learning in Gradually Shifting Environments
Ben Rank, Stelios Triantafyllou, Debmalya Mandal, Goran Radanović
Tags: OffRL
15 Feb 2024
Optimal Cost Constrained Adversarial Attacks For Multiple Agent Systems
Ziqing Lu, Guanlin Liu, Lifeng Lai, Weiyu Xu
Tags: AAML
01 Nov 2023
Efficient Adversarial Attacks on Online Multi-agent Reinforcement Learning
Guanlin Liu, Lifeng Lai
Tags: AAML
15 Jul 2023
Hiding in Plain Sight: Differential Privacy Noise Exploitation for Evasion-resilient Localized Poisoning Attacks in Multiagent Reinforcement Learning
Md Tamjid Hossain, Hung M. La
Tags: AAML
01 Jul 2023
BACKDOORL: Backdoor Attack against Competitive Reinforcement Learning
Lun Wang, Zaynah Javed, Xian Wu, Wenbo Guo, Xinyu Xing, D. Song
Tags: AAML
02 May 2021
Robust Reinforcement Learning on State Observations with Learned Optimal Adversary
Huan Zhang, Hongge Chen, Duane S. Boning, Cho-Jui Hsieh
21 Jan 2021