Hierarchical Subspaces of Policies for Continual Offline Reinforcement Learning

We consider a Continual Reinforcement Learning setup, in which a learning agent must continuously adapt to new tasks while retaining previously acquired skills, with a focus on avoiding the forgetting of past knowledge and on scaling gracefully as the number of tasks grows. Such issues prevail in autonomous robotics and video game simulations, notably for navigation tasks prone to topological or kinematic changes. To address them, we introduce HiSPO, a novel hierarchical framework designed specifically for continual learning in navigation settings from offline data. Our method leverages distinct policy subspaces of neural networks to enable flexible and efficient adaptation to new tasks while preserving existing knowledge. Through a careful experimental study, we demonstrate the effectiveness of our method in both classical MuJoCo maze environments and complex video-game-like navigation simulations, showing competitive performance and satisfying adaptability with respect to classical continual learning metrics, in particular memory usage and efficiency.
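
The abstract does not detail HiSPO's architecture, but the notion of a policy subspace it builds on can be illustrated concretely: a single network whose weights are a convex combination of a few anchor parameter sets, so that every point of the combination simplex is a distinct policy. The PyTorch sketch below is a minimal illustration under that assumption; the class name, layer shapes, and Dirichlet sampling are illustrative choices, not the paper's implementation. In subspace-based continual learning methods of this kind, a new task is typically accommodated by searching for a good combination weight, or by growing the subspace with an extra anchor, which keeps memory growth modest compared to storing a full network per task.

# Minimal sketch of a "subspace of policies": the policy's weights are a
# convex combination of K anchor parameter sets, so one module spans a
# continuum of policies. Hypothetical names and shapes, for illustration only.
import torch
import torch.nn as nn


class SubspacePolicy(nn.Module):
    """Policy whose weights lie in the convex hull of K anchor weight sets."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64, n_anchors: int = 3):
        super().__init__()
        self.n_anchors = n_anchors
        # One set of anchor parameters per layer; anchors are trained jointly.
        self.w1 = nn.Parameter(torch.randn(n_anchors, hidden, obs_dim) * 0.1)
        self.b1 = nn.Parameter(torch.zeros(n_anchors, hidden))
        self.w2 = nn.Parameter(torch.randn(n_anchors, act_dim, hidden) * 0.1)
        self.b2 = nn.Parameter(torch.zeros(n_anchors, act_dim))

    def forward(self, obs: torch.Tensor, alpha: torch.Tensor) -> torch.Tensor:
        # alpha: (n_anchors,) convex weights (non-negative, summing to 1),
        # selecting one point of the subspace, i.e. one concrete policy.
        w1 = torch.einsum("k,kho->ho", alpha, self.w1)
        b1 = torch.einsum("k,kh->h", alpha, self.b1)
        w2 = torch.einsum("k,kah->ah", alpha, self.w2)
        b2 = torch.einsum("k,ka->a", alpha, self.b2)
        h = torch.tanh(obs @ w1.T + b1)
        return torch.tanh(h @ w2.T + b2)


policy = SubspacePolicy(obs_dim=8, act_dim=2)
alpha = torch.distributions.Dirichlet(torch.ones(3)).sample()  # random point in the hull
action = policy(torch.zeros(1, 8), alpha)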
@article{kobanda2025_2412.14865,
  title   = {Hierarchical Subspaces of Policies for Continual Offline Reinforcement Learning},
  author  = {Anthony Kobanda and Rémy Portelas and Odalric-Ambrym Maillard and Ludovic Denoyer},
  journal = {arXiv preprint arXiv:2412.14865},
  year    = {2025}
}