
Learning from Human Directional Corrections

IEEE Transactions on Robotics (IEEE Trans. Robot.), 2020
Abstract

This paper proposes an approach that enables a robot to learn a control objective function incrementally from human directional corrections. Existing methods learn from corrections of specific magnitudes and require a human to carefully choose the magnitude of each correction, which otherwise can easily lead to over-correction and learning inefficiency. The proposed method requires only directional corrections -- corrections that indicate the direction of a control change without specifying its magnitude -- applied at some time instances during the robot's motion. We assume only that each human correction, regardless of its magnitude, points in a direction that improves the robot's current motion relative to an implicit control objective function. Under this assumption, valid corrections always account for half of the correction space. The proposed method uses the direction of a correction to update the estimate of the objective function based on a cutting-plane technique. We establish theoretical results showing that this process guarantees convergence of the learned objective function to the implicit one. The proposed approach has been examined in numerical examples, a user study on two human-robot games, and a real-world quadrotor experiment. The results confirm the convergence of the approach and show that it is significantly more effective (higher success rate), efficient/effortless (fewer human corrections needed), and accessible (fewer early wasted trials) than state-of-the-art interactive robot learning schemes.
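The core idea above -- that a directional correction, whatever its magnitude, cuts away the half of the hypothesis space inconsistent with it -- can be sketched with a toy example. The following Python snippet is a minimal illustration, not the paper's algorithm: it assumes the objective is parameterized by a unit weight vector `theta_true` (a hypothetical stand-in for the human's implicit objective), treats each correction direction as a linear half-space constraint on the weights, and estimates the weights by averaging samples that survive all accumulated cuts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hidden "true" objective weights the human implicitly optimizes
# (an assumption made purely for this illustration).
theta_true = np.array([0.8, -0.3, 0.5])
theta_true /= np.linalg.norm(theta_true)

def estimate_theta(constraints, n_samples=20000):
    """Crude cutting-plane estimate: average of unit-sphere samples
    that satisfy every accumulated half-space constraint d @ theta >= 0."""
    samples = rng.normal(size=(n_samples, 3))
    samples /= np.linalg.norm(samples, axis=1, keepdims=True)
    mask = np.ones(n_samples, dtype=bool)
    for d in constraints:
        mask &= samples @ d >= 0
    est = samples[mask].mean(axis=0)
    return est / np.linalg.norm(est)

# Simulate directional corrections: any direction with a positive
# component along theta_true counts as valid (half of the space),
# and its magnitude is irrelevant -- only the sign constraint is kept.
constraints = []
for _ in range(30):
    d = rng.normal(size=3)
    if d @ theta_true < 0:
        d = -d  # the human always corrects toward the improving half-space
    constraints.append(d)

theta_hat = estimate_theta(constraints)
angle_err = np.degrees(np.arccos(np.clip(theta_hat @ theta_true, -1.0, 1.0)))
print(f"angle error between estimate and true weights: {angle_err:.1f} deg")
```

Each correction shrinks the feasible cone of weight vectors, so the estimate tightens toward the implicit objective as corrections accumulate, mirroring the convergence behavior the paper establishes formally.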
