Stability Analysis of Deep Reinforcement Learning for Multi-Agent Inspection in a Terrestrial Testbed

Abstract

The design and deployment of autonomous systems for space missions demand solutions that remain robust under strict reliability requirements, extended operational durations, and communication challenges. This study evaluates the stability and performance of a hierarchical deep reinforcement learning (DRL) framework designed for multi-agent satellite inspection tasks. The proposed framework integrates a high-level guidance policy with a low-level motion controller, enabling scalable task allocation and efficient trajectory execution. Experiments conducted on the Local Intelligent Network of Collaborative Satellites (LINCS) testbed assess the framework's performance under varying levels of fidelity, from simulated environments to a cyber-physical testbed. Key metrics, including task completion rate, distance traveled, and fuel consumption, highlight the framework's robustness and adaptability despite real-world uncertainties such as sensor noise, dynamic perturbations, and runtime assurance (RTA) constraints. The results demonstrate that the hierarchical controller effectively bridges the sim-to-real gap, maintaining high task completion rates while adapting to the complexities of real-world environments. These findings validate the framework's potential for enabling autonomous satellite operations in future space missions.
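The hierarchical split the abstract describes can be illustrated with a minimal sketch: a high-level layer that allocates inspection points to agents, and a low-level layer that drives each agent toward its assigned point. All names here are hypothetical, and the greedy assignment and proportional controller are simple stand-ins, not the paper's learned DRL policies.

```python
import math

def assign_points(agents, points):
    """High-level layer (illustrative): greedily assign each agent
    its nearest still-unassigned inspection point."""
    remaining = list(points)
    assignment = {}
    for i, pos in enumerate(agents):
        best = min(remaining, key=lambda p: math.dist(pos, p))
        assignment[i] = best
        remaining.remove(best)
    return assignment

def low_level_step(pos, target, gain=0.5, max_dv=0.1):
    """Low-level layer (illustrative): proportional step toward the
    target, clipped to a per-step delta-v budget as a crude proxy
    for fuel-aware motion control."""
    dx = [t - p for p, t in zip(pos, target)]
    dist = math.hypot(*dx)
    if dist == 0.0:
        return pos
    step = min(gain * dist, max_dv)
    return [p + step * d / dist for p, d in zip(pos, dx)]

# Two agents, two inspection points in a 2-D plane.
agents = [[0.0, 0.0], [1.0, 1.0]]
points = [[0.0, 1.0], [1.0, 0.0]]
assignment = assign_points(agents, points)

# Roll out the low-level controller for agent 0 until it converges.
pos = agents[0]
for _ in range(100):
    pos = low_level_step(pos, assignment[0])
```

In the actual framework both layers are learned policies evaluated under sensor noise and RTA constraints; this sketch only shows the division of responsibility between task allocation and trajectory execution.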

@article{lei2025_2503.00056,
  title={Stability Analysis of Deep Reinforcement Learning for Multi-Agent Inspection in a Terrestrial Testbed},
  author={Henry Lei and Zachary S. Lippay and Anonto Zaman and Joshua Aurand and Amin Maghareh and Sean Phillips},
  journal={arXiv preprint arXiv:2503.00056},
  year={2025}
}