Your Agent Can Defend Itself against Backdoor Attacks

10 June 2025
Changjiang Li, Jiacheng Liang, Bochuan Cao, Jinghui Chen, Ting Wang
Communities: AAML, LLMAG
arXiv (abs) · PDF · HTML
Main: 8 pages · Bibliography: 2 pages · Appendix: 6 pages · 11 figures · 9 tables
Abstract

Despite their growing adoption across domains, large language model (LLM)-powered agents face significant security risks from backdoor attacks during training and fine-tuning. These compromised agents can subsequently be manipulated to execute malicious operations when presented with specific triggers in their inputs or environments. To address this pressing risk, we present ReAgent, a novel defense against a range of backdoor attacks on LLM-based agents. Intuitively, backdoor attacks often result in inconsistencies among the user's instruction, the agent's planning, and its execution. Drawing on this insight, ReAgent employs a two-level approach to detect potential backdoors. At the execution level, ReAgent verifies consistency between the agent's thoughts and actions; at the planning level, ReAgent leverages the agent's capability to reconstruct the instruction based on its thought trajectory, checking for consistency between the reconstructed instruction and the user's instruction. Extensive evaluation demonstrates ReAgent's effectiveness against various backdoor attacks across tasks. For instance, ReAgent reduces the attack success rate by up to 90% in database operation tasks, outperforming existing defenses by large margins. This work reveals the potential of utilizing compromised agents themselves to mitigate backdoor risks.
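
The abstract describes ReAgent's two-level consistency check: thought vs. action at the execution level, and reconstructed vs. original instruction at the planning level. The sketch below is only a minimal illustration of that idea, not the authors' implementation; the Step dataclass, the judge_consistent and reconstruct_instruction callables, and the returned (flag, reason) pair are hypothetical stand-ins for LLM-backed checks.

# Illustrative sketch of a two-level consistency check in the spirit of ReAgent.
# judge_consistent and reconstruct_instruction are hypothetical placeholders
# for LLM-backed queries; names and interfaces are not from the paper.
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class Step:
    thought: str   # the agent's stated reasoning for this step
    action: str    # the action the agent actually executed


def detect_backdoor(
    instruction: str,
    trajectory: List[Step],
    judge_consistent: Callable[[str, str], bool],
    reconstruct_instruction: Callable[[List[str]], str],
) -> Tuple[bool, str]:
    """Return (is_suspicious, reason) for a finished agent trajectory."""
    # Execution level: each action must be consistent with its stated thought.
    for i, step in enumerate(trajectory):
        if not judge_consistent(step.thought, step.action):
            return True, f"step {i}: action deviates from the stated thought"

    # Planning level: reconstruct the instruction from the thoughts alone,
    # then compare it against what the user actually asked for.
    reconstructed = reconstruct_instruction([s.thought for s in trajectory])
    if not judge_consistent(instruction, reconstructed):
        return True, "reconstructed instruction diverges from user instruction"

    return False, "no inconsistency detected"

In practice both callables would be prompts issued to the agent's own (possibly compromised) LLM, reflecting the abstract's point that the agent itself can be leveraged to mitigate backdoor risks.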

View on arXiv: https://arxiv.org/abs/2506.08336
@article{changjiang2025_2506.08336,
  title   = {Your Agent Can Defend Itself against Backdoor Attacks},
  author  = {Changjiang Li and Jiacheng Liang and Bochuan Cao and Jinghui Chen and Ting Wang},
  journal = {arXiv preprint arXiv:2506.08336},
  year    = {2025}
}