In the zero-shot policy transfer setting in reinforcement learning, the goal is to train an agent on a fixed set of training environments so that it can generalise to similar, but unseen, testing environments. Previous work has shown that policy distillation after training can sometimes produce a policy that outperforms the original in the testing environments. However, it is not yet entirely clear why that is, or what data should be used to distil the policy. In this paper, we prove, under certain assumptions, a generalisation bound for policy distillation after training. The theory provides two practical insights: for improved generalisation, one should 1) train an ensemble of distilled policies, and 2) distil them on as much data from the training environments as possible. We empirically verify that these insights continue to hold in more general settings, where the assumptions required for the theory no longer apply. Finally, we demonstrate that an ensemble of policies distilled on a diverse dataset can generalise significantly better than the original agent.
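To make the two insights concrete, the following is a minimal sketch, not the authors' implementation: the network sizes, ensemble size, and the random tensors standing in for states collected from the training environments are all illustrative assumptions. It distils each member of a student ensemble from a trained teacher policy on a large state dataset, and acts at test time by averaging the students' action distributions.

# Minimal sketch (illustrative only): distil an ensemble of student policies
# from a trained teacher policy on a large dataset of training-environment
# states, then act by averaging the ensemble's action distributions.
import torch
import torch.nn as nn
import torch.nn.functional as F

OBS_DIM, N_ACTIONS, ENSEMBLE_SIZE = 8, 4, 5  # hypothetical sizes

def make_policy():
    return nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))

teacher = make_policy()                             # stands in for the RL-trained policy
students = [make_policy() for _ in range(ENSEMBLE_SIZE)]

# States gathered from the training environments; random tensors stand in here
# for a large, diverse distillation dataset.
distillation_states = torch.randn(10_000, OBS_DIM)

for student in students:
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    for _ in range(100):                            # a few passes over the data
        idx = torch.randint(len(distillation_states), (256,))
        batch = distillation_states[idx]
        with torch.no_grad():
            teacher_probs = F.softmax(teacher(batch), dim=-1)
        student_log_probs = F.log_softmax(student(batch), dim=-1)
        # Distillation loss: match the teacher's action distribution.
        loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
        opt.zero_grad()
        loss.backward()
        opt.step()

def ensemble_action(obs: torch.Tensor) -> int:
    """Act with the ensemble by averaging the students' action distributions."""
    with torch.no_grad():
        probs = torch.stack([F.softmax(s(obs), dim=-1) for s in students]).mean(0)
    return int(probs.argmax())

print(ensemble_action(torch.randn(OBS_DIM)))

At test time only the averaged student distribution is used; the teacher is discarded after distillation, which matches the abstract's framing of distillation as a post-training step.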
@article{weltevrede2025_2505.16581,
  title   = {How Ensembles of Distilled Policies Improve Generalisation in Reinforcement Learning},
  author  = {Max Weltevrede and Moritz A. Zanger and Matthijs T.J. Spaan and Wendelin Böhmer},
  journal = {arXiv preprint arXiv:2505.16581},
  year    = {2025}
}