Autonomous Exploration and Semantic Updating of Large-Scale Indoor Environments with Mobile Robots

We introduce a new robotic system that enables a mobile robot to autonomously explore an unknown environment, build a semantic map of the environment, and subsequently update the semantic map to reflect changes in the environment, such as objects being moved. Our system leverages a LiDAR scanner for 2D occupancy grid mapping and an RGB-D camera for object perception. We propose a semantic map representation that combines a 2D occupancy grid map for geometry with a topological map for object semantics. This representation enables us to effectively update the semantics by deleting or adding nodes to the topological map. Our system has been tested on a Fetch robot, semantically mapping a 93 m x 90 m and a 9 m x 13 m indoor environment and updating their semantic maps once objects are moved in the environments.
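
As a concrete illustration of this representation, the sketch below shows one way a 2D occupancy grid could be paired with a graph-like topological object map that supports adding and deleting nodes. This is a minimal sketch, not the paper's implementation; the names SemanticMap, ObjectNode, add_object, remove_object, and update_object are hypothetical.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class ObjectNode:
    """A node in the topological map carrying one object's semantics."""
    node_id: int
    label: str                      # e.g. "chair", "table"
    position: tuple                 # (x, y) in the map frame, in meters

@dataclass
class SemanticMap:
    """Occupancy grid for geometry plus a node set for object semantics."""
    occupancy_grid: np.ndarray      # 2D grid: 0 free, 1 occupied, -1 unknown
    resolution: float               # meters per grid cell
    objects: dict = field(default_factory=dict)  # node_id -> ObjectNode

    def add_object(self, node: ObjectNode) -> None:
        """Add a newly detected object as a node in the topological map."""
        self.objects[node.node_id] = node

    def remove_object(self, node_id: int) -> None:
        """Delete the node of an object no longer observed at its location."""
        self.objects.pop(node_id, None)

    def update_object(self, node_id: int, new_position: tuple) -> None:
        """Reflect a moved object by deleting its stale node and re-adding it."""
        node = self.objects.pop(node_id, None)
        if node is not None:
            node.position = new_position
            self.objects[node_id] = node

# Example usage: build a map, register an object, then record that it moved.
smap = SemanticMap(occupancy_grid=np.full((1860, 1800), -1), resolution=0.05)
smap.add_object(ObjectNode(node_id=0, label="chair", position=(3.2, 7.5)))
smap.update_object(0, new_position=(4.0, 6.1))
```

Because the geometry and the object semantics live in separate layers, an update after an object moves only touches the affected nodes and leaves the occupancy grid untouched.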
@article{allu2025_2409.15493,
  title   = {Autonomous Exploration and Semantic Updating of Large-Scale Indoor Environments with Mobile Robots},
  author  = {Sai Haneesh Allu and Itay Kadosh and Tyler Summers and Yu Xiang},
  journal = {arXiv preprint arXiv:2409.15493},
  year    = {2025}
}