Being able to solve a task in diverse ways makes agents more robust to task variations and less prone to local optima. In this context, constrained diversity optimization has emerged as a powerful reinforcement learning (RL) framework for training a diverse set of agents in parallel. However, existing constrained-diversity RL methods often under-explore in complex tasks such as robotic manipulation, leading to a lack of policy diversity. To improve diversity optimization in RL, we therefore propose a curriculum that first explores at the trajectory level before learning step-based policies. In our empirical evaluation, we provide novel insights into the shortcomings of skill-based diversity optimization, and demonstrate that our curriculum improves the diversity of the learned skills.
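To make the two-phase idea more concrete, the following is a minimal, self-contained Python sketch of a curriculum of this kind. It is an illustrative toy, not the paper's implementation: the trajectory-level phase hill-climbs a set of trajectory parameters (e.g., movement-primitive weights) to increase a simple pairwise-distance diversity measure subject to a task-success constraint, and the step-based phase that would follow is only indicated in a comment. All names, thresholds, and reward functions (`rollout_return`, `diversity`, `trajectory_level_search`) are hypothetical placeholders.

```python
"""Illustrative two-phase curriculum sketch (not the authors' code):
Phase 1 searches in a low-dimensional trajectory-parameter space for a set of
skills that are mutually diverse while satisfying a task-success constraint;
Phase 2 would warm-start step-based policies from these trajectories."""
import numpy as np

rng = np.random.default_rng(0)


def rollout_return(theta):
    # Placeholder return of a trajectory parameterized by theta
    # (e.g., movement-primitive weights). Replace with a real rollout.
    return -float(np.sum((theta - 1.0) ** 2))


def diversity(thetas):
    # Mean pairwise distance between trajectory parameters as a crude
    # diversity measure; real methods typically compare state trajectories.
    if len(thetas) < 2:
        return 0.0
    dists = [np.linalg.norm(a - b)
             for i, a in enumerate(thetas) for b in thetas[i + 1:]]
    return float(np.mean(dists))


def trajectory_level_search(n_skills=4, dim=8, iters=500,
                            success_threshold=-5.0, sigma=0.3):
    """Phase 1: hill-climb a set of trajectory parameters, accepting a
    perturbation only if the skill stays above the success constraint and
    the set's diversity does not decrease (a stand-in for constrained
    diversity optimization at the trajectory level)."""
    # Start from feasible (task-solving) parameters, then push diversity.
    thetas = [1.0 + 0.1 * rng.normal(size=dim) for _ in range(n_skills)]
    for _ in range(iters):
        k = rng.integers(n_skills)
        candidate = thetas[k] + sigma * rng.normal(size=dim)
        if rollout_return(candidate) < success_threshold:
            continue  # violates the task-success constraint
        trial = list(thetas)
        trial[k] = candidate
        if diversity(trial) >= diversity(thetas):
            thetas = trial
    return thetas


if __name__ == "__main__":
    skills = trajectory_level_search()
    print("diversity of discovered trajectory parameters:",
          round(diversity(skills), 3))
    # Phase 2 (not shown): use each skill's trajectory to initialize or guide
    # a step-based policy, then continue diversity-constrained RL per skill.
```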
@article{braun2025_2506.01568,
  title={Trajectory First: A Curriculum for Discovering Diverse Policies},
  author={Cornelius V. Braun and Sayantan Auddy and Marc Toussaint},
  journal={arXiv preprint arXiv:2506.01568},
  year={2025}
}