Towards Robust Deep Reinforcement Learning against Environmental State Perturbation

Abstract

Adversarial attacks and robustness in Deep Reinforcement Learning (DRL) have been widely studied under various threat models; however, few consider environmental state perturbations, which arise naturally in embodied scenarios. To improve the robustness of DRL agents, we formulate the problem of environmental state perturbation, introduce a preliminary non-targeted attack method as a calibration adversary, and then propose a defense framework, named Boosted Adversarial Training (BAT), which first tunes the agent via supervised learning to avoid catastrophic failure and subsequently trains it adversarially with reinforcement learning. Extensive experimental results substantiate the vulnerability of mainstream agents under environmental state perturbations and the effectiveness of our proposed attack. The defense results demonstrate that while existing robust reinforcement learning algorithms may not be suitable for this setting, our BAT framework can significantly enhance the robustness of agents against environmental state perturbations across various situations.
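The BAT framework described above is a two-stage procedure: supervised tuning followed by adversarial reinforcement learning. The snippet below is a minimal sketch of that pipeline in PyTorch, not the authors' implementation; it assumes a toy discrete-action environment with a `reset()`/`step()` interface, a behavior-cloning dataset of (state, expert action) pairs, and a placeholder random-noise perturbation standing in for the paper's non-targeted attack. Names such as `supervised_stage` and `adversarial_rl_stage` are illustrative only.

```python
# Hedged sketch of a two-stage "boosted adversarial training" loop (hypothetical names).
# Stage 1: supervised tuning (behavior cloning) to avoid catastrophic failure.
# Stage 2: adversarial RL (plain REINFORCE) on perturbed environmental states.
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, N_ACTIONS = 8, 4

policy = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def perturb(state):
    """Placeholder environmental-state adversary: additive random noise."""
    return state + 0.1 * torch.randn_like(state)

# ---- Stage 1: supervised tuning on (state, expert_action) batches ----
def supervised_stage(dataset, epochs=10):
    for _ in range(epochs):
        for states, expert_actions in dataset:
            loss = F.cross_entropy(policy(states), expert_actions)
            opt.zero_grad(); loss.backward(); opt.step()

# ---- Stage 2: adversarial RL on perturbed states (REINFORCE) ----
def adversarial_rl_stage(env, episodes=100, gamma=0.99):
    for _ in range(episodes):
        log_probs, rewards = [], []
        state, done = env.reset(), False
        while not done:
            obs = perturb(torch.as_tensor(state, dtype=torch.float32))
            dist = torch.distributions.Categorical(logits=policy(obs))
            action = dist.sample()
            log_probs.append(dist.log_prob(action))
            state, reward, done = env.step(action.item())  # toy env interface (assumed)
            rewards.append(reward)
        # Discounted returns, then a policy-gradient update.
        returns, G = [], 0.0
        for r in reversed(rewards):
            G = r + gamma * G
            returns.insert(0, G)
        returns = torch.tensor(returns)
        loss = -(torch.stack(log_probs) * returns).sum()
        opt.zero_grad(); loss.backward(); opt.step()
```

In this sketch the same policy network and optimizer are shared across both stages, so the supervised stage simply provides a safe initialization that the adversarial RL stage then refines under perturbed states.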

@article{wang2025_2506.08961,
  title={Towards Robust Deep Reinforcement Learning against Environmental State Perturbation},
  author={Chenxu Wang and Huaping Liu},
  journal={arXiv preprint arXiv:2506.08961},
  year={2025}
}