Model Predictive Path Integral (MPPI) control, Reinforcement Learning (RL), and diffusion models have each demonstrated strong performance in trajectory optimization, decision-making, and motion planning. However, these approaches have traditionally been treated as distinct methodologies with separate optimization frameworks. In this work, we establish a unified perspective that connects MPPI, RL, and diffusion models through gradient-based optimization on the Gibbs measure. We first show that MPPI can be interpreted as performing gradient ascent on a smoothed energy function. We then demonstrate that policy gradient methods reduce to MPPI when an exponential transformation is applied to the objective function. Finally, we establish that the reverse sampling process in diffusion models follows the same update rule as MPPI.
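As a minimal sketch of the MPPI side of this connection (not the paper's implementation; the function and parameter names below, such as mppi_update and lam, are illustrative), the following Python snippet implements the standard importance-weighted MPPI update. The Gibbs weights w_i proportional to exp(-E_i / lambda) make the weighted average of perturbations act, for Gaussian noise, like a gradient-ascent step on the smoothed objective J(u) = lambda * log E_eps[exp(-E(u + eps) / lambda)], whose maximizer matches the mode of the Gibbs measure p(tau) proportional to exp(-E(tau) / lambda).

import numpy as np

def mppi_update(u, sample_noise, energy, lam=1.0, n_samples=256):
    # Draw control perturbations eps_i and score each perturbed control.
    eps = sample_noise(n_samples)                       # shape (n_samples, *u.shape)
    energies = np.array([energy(u + e) for e in eps])   # E(u + eps_i)
    # Gibbs weights w_i ~ exp(-E_i / lam); subtract the min for numerical stability.
    w = np.exp(-(energies - energies.min()) / lam)
    w /= w.sum()
    # Importance-weighted update: for Gaussian perturbations this is a
    # preconditioned gradient-ascent step on the smoothed energy objective.
    return u + np.tensordot(w, eps, axes=1)

A toy usage, with a quadratic energy whose minimum sits at (1, -2):

rng = np.random.default_rng(0)
u = np.zeros(2)
noise = lambda n: rng.normal(scale=0.5, size=(n, 2))
energy = lambda v: float(np.sum((v - np.array([1.0, -2.0])) ** 2))
for _ in range(100):
    u = mppi_update(u, noise, energy, lam=0.1)
print(u)  # approaches [1, -2]

As lam shrinks, the weights concentrate on the lowest-energy samples and the update approaches greedy selection; larger lam averages more broadly, which is the smoothing the gradient-ascent interpretation relies on.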
@article{li2025_2502.20476,
  title={Unifying Model Predictive Path Integral Control, Reinforcement Learning, and Diffusion Models for Optimal Control and Planning},
  author={Yankai Li and Mo Chen},
  journal={arXiv preprint arXiv:2502.20476},
  year={2025}
}