Adaptive Variants of Optimal Feedback Policies

Conference on Learning for Dynamics & Control (L4DC), 2021
Abstract

This paper presents a control-theoretic framework that stably combines optimal feedback policies with online learning for the control of uncertain nonlinear systems. Given unknown parameters within a bounded range, the resulting adaptive control laws guarantee convergence of the closed-loop system to the state of zero cost. The framework permits use of the certainty-equivalence principle when designing optimal policies and value functions, with stability of learning and control guaranteed through online adjustment of the learning rate. The approach is demonstrated on the familiar mountain car problem, where it yields near-optimal behavior despite parametric uncertainty.
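To illustrate the general idea of certainty-equivalence control combined with a stabilizing adaptation law, the sketch below regulates a scalar plant x' = a*x + u with unknown parameter a. This is a standard textbook adaptive-regulation example in the same spirit as the abstract, not the paper's construction; all names and constants are illustrative.

```python
def adaptive_regulate(a_true=1.5, a_hat0=0.0, gamma=2.0, k=1.0,
                      x0=1.0, dt=1e-3, steps=20000):
    """Adaptive regulation of the scalar plant x' = a*x + u, a unknown.

    Certainty-equivalence feedback u = -(a_hat + k)*x, paired with the
    Lyapunov-based adaptation law a_hat' = gamma * x**2, drives x -> 0:
    for V = x**2/2 + (a_hat - a)**2/(2*gamma) one gets V' = -k*x**2 <= 0.
    """
    x, a_hat = x0, a_hat0
    for _ in range(steps):
        u = -(a_hat + k) * x           # control using the current estimate
        x += dt * (a_true * x + u)     # forward-Euler plant step
        a_hat += dt * gamma * x * x    # adaptation law (learning rate gamma)
    return x, a_hat
```

Even though the initial estimate a_hat0 = 0 makes the loop momentarily unstable (a_true exceeds a_hat + k), the estimate grows until the feedback dominates and the state decays to zero; the learning rate gamma plays the role of the tunable gain the abstract alludes to.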
