Equivariant Q Learning in Spatial Action Spaces

Abstract

Recently, a variety of new equivariant neural network model architectures have been proposed that generalize better over rotational and reflectional symmetries than standard models. These models are relevant to robotics because many robotics problems can be expressed in a rotationally symmetric way. This paper focuses on equivariance over a visual state space and a spatial action space -- the setting where the robot action space includes a subset of SE(2). In this situation, we know a priori that rotations and translations in the state image should result in the same rotations and translations in the spatial action dimensions of the optimal policy. Therefore, we can use equivariant model architectures to make Q learning more sample efficient. This paper identifies when the optimal Q function is equivariant and proposes Q network architectures for this setting. We show experimentally that this approach outperforms standard methods in a set of challenging manipulation problems.
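The equivariance property described above -- rotating or translating the state image should rotate or translate the spatial Q map the same way -- can be illustrated with a toy example. The sketch below is not the paper's architecture; it uses a hypothetical per-pixel `q_map` whose only structure is a rotation-symmetric 4-neighbour average, which is enough to make the map equivariant under 90-degree rotations and cyclic translations of the grid.

```python
import numpy as np

def q_map(state):
    # Hypothetical per-pixel Q function: each pixel's value is its
    # intensity plus the mean of its 4 neighbours (periodic boundary).
    # Because the 4-neighbourhood is symmetric under 90-degree rotation,
    # this map commutes with rotations and translations of the grid.
    up    = np.roll(state, -1, axis=0)
    down  = np.roll(state,  1, axis=0)
    left  = np.roll(state, -1, axis=1)
    right = np.roll(state,  1, axis=1)
    return state + 0.25 * (up + down + left + right)

rng = np.random.default_rng(0)
s = rng.random((8, 8))          # toy state image

# Rotation equivariance: Q(g . s) == g . Q(s) for a 90-degree rotation g
assert np.allclose(q_map(np.rot90(s)), np.rot90(q_map(s)))

# Translation equivariance: shifting the state shifts the Q map
assert np.allclose(q_map(np.roll(s, 3, axis=1)), np.roll(q_map(s), 3, axis=1))
```

An equivariant Q network bakes this constraint into its layers, so the agent does not have to relearn the policy separately for every orientation of the scene -- which is the source of the sample-efficiency gain the abstract claims.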
