NavBench: A Unified Robotics Benchmark for Reinforcement Learning-Based Autonomous Navigation

Abstract

Autonomous robots must navigate and operate in diverse environments, from terrestrial and aquatic settings to aerial and space domains. While Reinforcement Learning (RL) has shown promise in training policies for specific autonomous robots, existing benchmarks are often constrained to a single platform, limiting generalization and fair comparisons across different mobility systems. In this paper, we present NavBench, a multi-domain benchmark for training and evaluating RL-based navigation policies across diverse robotic platforms and operational environments. Built on IsaacLab, our framework standardizes task definitions, enabling different robots to tackle various navigation challenges without the need for ad-hoc task redesigns or custom evaluation metrics. Our benchmark addresses three key challenges: (1) Unified cross-medium benchmarking, enabling direct evaluation of diverse actuation methods (thrusters, wheels, water-based propulsion) in realistic environments; (2) Scalable and modular design, facilitating seamless robot-task interchangeability and reproducible training pipelines; and (3) Robust sim-to-real validation, demonstrated through successful policy transfer to multiple real-world robots, including a satellite robotic simulator, an unmanned surface vessel, and a wheeled ground vehicle. By ensuring consistency between simulation and real-world deployment, NavBench simplifies the development of adaptable RL-based navigation strategies. Its modular design allows researchers to easily integrate custom robots and tasks by following the framework's predefined templates, making it accessible for a wide range of applications. Our code is publicly available at NavBench.
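
To illustrate the robot-task interchangeability the abstract describes, the following is a minimal, self-contained Python sketch. All names here (RobotCfg, TaskCfg, make_env_cfg, the registry entries) are hypothetical assumptions for illustration only and do not reflect NavBench's actual IsaacLab-based API; the point is simply that a shared task definition can be paired with robots using different actuation methods.

from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical illustration only: names and structure are assumptions, not NavBench's API.

@dataclass
class RobotCfg:
    name: str
    actuation: str          # e.g. "thrusters", "wheels", "water_propulsion"
    action_dim: int

@dataclass
class TaskCfg:
    name: str
    observation_keys: List[str] = field(default_factory=lambda: ["pose", "goal"])
    episode_length_s: float = 20.0

# Registries keyed by name so robots and tasks can be mixed freely.
ROBOTS: Dict[str, RobotCfg] = {
    "floating_platform": RobotCfg("floating_platform", "thrusters", 8),
    "surface_vessel": RobotCfg("surface_vessel", "water_propulsion", 2),
    "wheeled_rover": RobotCfg("wheeled_rover", "wheels", 2),
}

TASKS: Dict[str, TaskCfg] = {
    "go_to_position": TaskCfg("go_to_position"),
    "track_velocity": TaskCfg("track_velocity", observation_keys=["velocity", "goal_velocity"]),
}

def make_env_cfg(robot: str, task: str) -> dict:
    """Pair any registered robot with any registered task without redefining either."""
    r, t = ROBOTS[robot], TASKS[task]
    return {"robot": r, "task": t, "action_dim": r.action_dim}

if __name__ == "__main__":
    # The same task definition is reused across actuation methods.
    for robot_name in ROBOTS:
        cfg = make_env_cfg(robot_name, "go_to_position")
        print(cfg["robot"].name, "->", cfg["task"].name, "| actions:", cfg["action_dim"])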

@article{el-hariry2025_2505.14526,
  title={NavBench: A Unified Robotics Benchmark for Reinforcement Learning-Based Autonomous Navigation},
  author={Matteo El-Hariry and Antoine Richard and Ricard M. Castan and Luis F. W. Batista and Matthieu Geist and Cedric Pradalier and Miguel Olivares-Mendez},
  journal={arXiv preprint arXiv:2505.14526},
  year={2025}
}