Generalizable Motion Planning via Operator Learning

Abstract

In this work, we introduce a planning neural operator (PNO) for predicting the value function of a motion planning problem. We recast value function approximation as learning a single operator from the cost function space to the value function space, which is defined by an Eikonal partial differential equation (PDE). Therefore, our PNO model, despite being trained with a finite number of samples at coarse resolution, inherits the zero-shot super-resolution property of neural operators. We demonstrate accurate value function approximation at 16× the training resolution on the MovingAI lab's 2D city dataset, compare with state-of-the-art neural value function predictors on 3D scenes from the iGibson building dataset, and showcase optimal planning with 4-DOF robotic manipulators. Lastly, we investigate employing the value function output of PNO as a heuristic function to accelerate motion planning. We show theoretically that the PNO heuristic is ε-consistent by introducing an inductive bias layer that guarantees our value functions satisfy the triangle inequality. With our heuristic, we achieve a 30% decrease in nodes visited while obtaining near optimal path lengths on the MovingAI lab 2D city dataset, compared to classical planning methods (A*, RRT*).
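The value function the abstract describes is the solution of an Eikonal PDE, |∇V(x)| = c(x) with V = 0 at the goal, where c is the traversal-cost field. As a point of reference for what the operator is trained to approximate, here is a minimal classical fast-sweeping solver on a 2D grid. This is an illustrative sketch, not the paper's code; the function name, grid discretization, and sweep count are choices made here for clarity.

```python
import numpy as np

def eikonal_fast_sweep(cost, goal, h=1.0, n_sweeps=8):
    """Fast-sweeping solver for |grad V| = cost with V(goal) = 0.

    cost : 2D array of positive traversal costs.
    goal : (i, j) index of the goal cell.
    Returns V, an approximation of the optimal cost-to-go
    (the value function a planner would query as a heuristic).
    """
    nx, ny = cost.shape
    V = np.full(cost.shape, np.inf)
    V[goal] = 0.0
    # Gauss-Seidel sweeps in all four grid orderings.
    orders = [(range(nx), range(ny)),
              (range(nx - 1, -1, -1), range(ny)),
              (range(nx), range(ny - 1, -1, -1)),
              (range(nx - 1, -1, -1), range(ny - 1, -1, -1))]
    for _ in range(n_sweeps):
        for xs, ys in orders:
            for i in xs:
                for j in ys:
                    if (i, j) == goal:
                        continue
                    # Smallest upwind neighbor value in each axis.
                    a = min(V[i - 1, j] if i > 0 else np.inf,
                            V[i + 1, j] if i < nx - 1 else np.inf)
                    b = min(V[i, j - 1] if j > 0 else np.inf,
                            V[i, j + 1] if j < ny - 1 else np.inf)
                    f = cost[i, j] * h
                    # Local upwind update for the Eikonal equation.
                    if abs(a - b) >= f:
                        v = min(a, b) + f
                    else:
                        v = 0.5 * (a + b + np.sqrt(2.0 * f * f - (a - b) ** 2))
                    V[i, j] = min(V[i, j], v)
    return V
```

On a uniform unit-cost grid with the goal at a corner, V recovers the distance field along the axes exactly and approximates Euclidean distance elsewhere. The O(N) per-sweep cost on the grid is what makes a learned operator attractive at high resolution.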

@article{matada2025_2410.17547,
  title={Generalizable Motion Planning via Operator Learning},
  author={Sharath Matada and Luke Bhan and Yuanyuan Shi and Nikolay Atanasov},
  journal={arXiv preprint arXiv:2410.17547},
  year={2025}
}