Discovering Temporal Structure: An Overview of Hierarchical Reinforcement Learning

Developing agents capable of exploring, planning, and learning in complex open-ended environments is a grand challenge in artificial intelligence (AI). Hierarchical reinforcement learning (HRL) offers a promising solution to this challenge by discovering and exploiting the temporal structure within a stream of experience. The strong appeal of the HRL framework has led to a rich and diverse body of literature attempting to discover useful structure. However, it is still not clear how one might define what constitutes good structure in the first place, or the kinds of problems in which identifying it may be helpful. This work aims to identify the benefits of HRL from the perspective of the fundamental challenges in decision-making, as well as to highlight its impact on the performance trade-offs of AI agents. Through these benefits, we then cover the families of methods that discover temporal structure in HRL, ranging from learning directly from online experience, to learning from offline datasets, to leveraging large language models (LLMs). Finally, we highlight the challenges of temporal structure discovery and the domains that are particularly well-suited for such endeavours.
@article{klissarov2025_2506.14045,
  title={Discovering Temporal Structure: An Overview of Hierarchical Reinforcement Learning},
  author={Martin Klissarov and Akhil Bagaria and Ziyan Luo and George Konidaris and Doina Precup and Marlos C. Machado},
  journal={arXiv preprint arXiv:2506.14045},
  year={2025}
}