Toward Task Generalization via Memory Augmentation in Meta-Reinforcement Learning

Abstract

Agents trained via reinforcement learning (RL) often struggle to perform well on tasks that differ from those encountered during training. This limitation presents a challenge to the broader deployment of RL in diverse and dynamic task settings. In this work, we introduce memory augmentation, a memory-based RL approach to improve task generalization. Our approach leverages task-structured augmentations to simulate plausible out-of-distribution scenarios and incorporates memory mechanisms to enable context-aware policy adaptation. Trained on a predefined set of tasks, our policy demonstrates the ability to generalize to unseen tasks through memory augmentation without requiring additional interactions with the environment. Through extensive simulation experiments and real-world hardware evaluations on legged locomotion tasks, we demonstrate that our approach achieves zero-shot generalization to unseen tasks while maintaining robust in-distribution performance and high sample efficiency.
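To make the idea concrete, here is a minimal, hypothetical sketch (not the authors' implementation) of the two ingredients the abstract names: a task-structured augmentation that perturbs task parameters to simulate plausible out-of-distribution variants, and a recurrent policy whose hidden state acts as a memory summarizing the interaction context. All names and dimensions below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_task(params, scale_range=(0.8, 1.2)):
    """Task-structured augmentation (assumed form): randomly rescale
    task parameters to simulate plausible out-of-distribution tasks."""
    return params * rng.uniform(*scale_range, size=params.shape)

class MemoryPolicy:
    """Toy recurrent policy: a hidden state carries context across
    steps, enabling context-aware adaptation without extra env interaction."""
    def __init__(self, obs_dim, act_dim, hid_dim=8):
        self.W_h = rng.normal(scale=0.1, size=(hid_dim, hid_dim))
        self.W_x = rng.normal(scale=0.1, size=(hid_dim, obs_dim))
        self.W_a = rng.normal(scale=0.1, size=(act_dim, hid_dim))
        self.h = np.zeros(hid_dim)

    def act(self, obs):
        # Update the memory state from the new observation, then act on it.
        self.h = np.tanh(self.W_h @ self.h + self.W_x @ obs)
        return self.W_a @ self.h

# Illustrative training loop over memory-augmented task variants.
base_task = np.array([1.0, 0.5])        # hypothetical task parameters
policy = MemoryPolicy(obs_dim=2, act_dim=1)
for _ in range(5):
    task = augment_task(base_task)       # sample an augmented task variant
    action = policy.act(task)            # memory accumulates task context
print(action.shape)
```

The sketch only shows the data flow; the actual method presumably trains the recurrent policy with an RL objective over the augmented task distribution.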

@article{bao2025_2502.01521,
  title={Toward Task Generalization via Memory Augmentation in Meta-Reinforcement Learning},
  author={Kaixi Bao and Chenhao Li and Yarden As and Andreas Krause and Marco Hutter},
  journal={arXiv preprint arXiv:2502.01521},
  year={2025}
}