
When Sensors Fail: Temporal Sequence Models for Robust PPO under Sensor Drift

Kevin Vogt-Lowell
Theodoros Tsiligkaridis
Rodney Lafuente-Mercado
Surabhi Ghatti
Shanghua Gao
Marinka Zitnik
Daniela Rus
Main: 7 pages · 3 figures · 2 tables · Bibliography: 3 pages · Appendix: 4 pages
Abstract

Real-world reinforcement learning systems must operate under distributional drift in their observation streams, yet most policy architectures implicitly assume fully observed, noise-free states. We study the robustness of Proximal Policy Optimization (PPO) under temporally persistent sensor failures that induce partial observability and representation shift. To respond to this drift, we augment PPO with temporal sequence models, including Transformers and State Space Models (SSMs), enabling policies to infer missing information from history and maintain performance. Under a stochastic sensor failure process, we prove a high-probability bound on infinite-horizon reward degradation that quantifies how robustness depends on policy smoothness and failure persistence. Empirically, on MuJoCo continuous-control benchmarks with severe sensor dropout, Transformer-based sequence policies substantially outperform MLP, RNN, and SSM baselines in robustness, maintaining high returns even when large fractions of sensors are unavailable. These results demonstrate that temporal sequence reasoning provides a principled and practical mechanism for reliable operation under observation drift caused by sensor unreliability.
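To make the failure model concrete, here is a minimal sketch of a temporally persistent sensor-dropout process of the kind the abstract describes: each sensor independently follows a two-state Markov chain between "healthy" and "failed", so outages persist across timesteps rather than occurring i.i.d. The function name and the failure/recovery probabilities are hypothetical illustrations, not the paper's exact process.

```python
import numpy as np

def simulate_persistent_dropout(obs_seq, p_fail=0.05, p_recover=0.2, seed=0):
    """Mask an observation sequence with per-sensor persistent failures.

    Each sensor switches independently between 'healthy' and 'failed'
    states via a two-state Markov chain: a healthy sensor fails with
    probability p_fail per step, and a failed sensor recovers with
    probability p_recover. Failed sensors read 0.0 until recovery, so
    dropout is temporally correlated. (Illustrative sketch only; the
    paper's exact stochastic failure process may differ.)
    """
    rng = np.random.default_rng(seed)
    T, d = obs_seq.shape
    failed = np.zeros(d, dtype=bool)          # start with all sensors healthy
    masked = np.empty_like(obs_seq)
    for t in range(T):
        fail_now = rng.random(d) < p_fail      # healthy -> failed transitions
        recover_now = rng.random(d) < p_recover  # failed -> healthy transitions
        failed = (failed & ~recover_now) | (~failed & fail_now)
        masked[t] = np.where(failed, 0.0, obs_seq[t])
    return masked
```

Because failures persist, a memoryless (MLP) policy sees long runs of zeroed coordinates with no way to recover the state, whereas a sequence policy can, in principle, infer the missing values from the observation history preceding the outage.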
