
Non-Gaussian SLAP: Simultaneous Localization and Planning Under Non-Gaussian Uncertainty in Static and Dynamic Environments

Abstract

Simultaneous Localization and Planning (SLAP) under process and measurement uncertainties is a challenging problem. In its general form, it requires solving a stochastic control problem modeled as a Partially Observable Markov Decision Process (POMDP). For a convex environment, we propose an optimization-based open-loop optimal control problem coupled with a receding horizon control strategy to plan high-quality trajectories along which state-localization uncertainty is reduced while the system reaches a goal state with minimum control effort. In a static environment with non-convex state constraints, the optimization is modified by defining barrier functions to obtain collision-free paths while preserving the previous objectives. By initializing the optimization with trajectories in different homotopy classes and comparing the resulting costs, we improve the quality of the solution in the presence of action and measurement uncertainties. In dynamic environments with time-varying constraints, such as moving obstacles or restricted regions, the approach is extended to find collision-free trajectories. In this paper, the underlying spaces are continuous, and beliefs are non-Gaussian. Without obstacles, the optimization is a globally convex problem, while in the presence of obstacles it becomes locally convex. We demonstrate the performance of the method in several scenarios.
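To make the receding-horizon-with-barrier idea concrete, the following is a minimal Python sketch, not the paper's formulation: it assumes a deterministic single-integrator model, a single circular obstacle, and hand-picked cost weights, and it omits the belief/uncertainty terms entirely. All names (GOAL, OBSTACLE_C, rollout, cost, receding_horizon_step) are hypothetical. At each step, a horizon of controls is optimized against control effort, a terminal goal cost, and a log-barrier obstacle penalty; only the first control is executed before re-solving.

```python
# Minimal receding-horizon sketch with a log-barrier obstacle penalty.
# Illustrative only: dynamics, weights, and barrier form are placeholders,
# and the non-Gaussian belief machinery of the paper is not modeled.
import numpy as np
from scipy.optimize import minimize

DT, HORIZON = 0.1, 15
GOAL = np.array([2.0, 2.0])                              # hypothetical goal
OBSTACLE_C, OBSTACLE_R = np.array([1.0, 1.0]), 0.4       # hypothetical obstacle

def rollout(x0, u_flat):
    """Propagate single-integrator dynamics x_{k+1} = x_k + dt * u_k."""
    xs = [x0]
    for u in u_flat.reshape(HORIZON, 2):
        xs.append(xs[-1] + DT * u)
    return np.array(xs)

def cost(u_flat, x0):
    xs = rollout(x0, u_flat)
    effort = 0.01 * np.sum(u_flat ** 2)                  # control effort
    terminal = 10.0 * np.sum((xs[-1] - GOAL) ** 2)       # reach the goal
    # Log-barrier penalty: grows as the trajectory approaches the obstacle.
    d = np.linalg.norm(xs - OBSTACLE_C, axis=1) - OBSTACLE_R
    barrier = 0.05 * np.sum(-np.log(np.clip(d, 1e-6, None)))
    return effort + terminal + barrier

def receding_horizon_step(x0, u_init):
    res = minimize(cost, u_init, args=(x0,), method="L-BFGS-B")
    u_seq = res.x.reshape(HORIZON, 2)
    return u_seq[0], res.x   # apply first control; warm-start the next solve

if __name__ == "__main__":
    x = np.array([0.0, 0.0])
    u_warm = np.zeros(2 * HORIZON)
    for _ in range(60):
        u0, u_warm = receding_horizon_step(x, u_warm)
        x = x + DT * u0       # execute only the first control, then replan
    print("final state:", x)
```

Warm-starting each solve with the previous solution mirrors the replanning loop; initializing with trajectories from different homotopy classes (e.g., passing the obstacle on either side) and keeping the cheaper result is one way to mitigate the local convexity noted in the abstract.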
