The Impact of On-Policy Parallelized Data Collection on Deep Reinforcement Learning Networks

The use of parallel actors for data collection has proven an effective technique in reinforcement learning (RL) algorithms. The manner in which data is collected in these algorithms, controlled via the number of parallel environments and the rollout length, induces a form of bias-variance trade-off; the number of training passes over the collected data, on the other hand, must strike a balance between sample efficiency and overfitting. We conduct an empirical analysis of these trade-offs on PPO, one of the most popular RL algorithms that uses parallel actors, and establish connections to network plasticity and, more generally, optimization stability. We also examine how these data-collection choices interact with network architecture, and how hyper-parameter sensitivity changes as data is scaled. Our analyses indicate that larger dataset sizes can increase final performance across a variety of settings, and that scaling the number of parallel environments is more effective than increasing rollout lengths. These findings highlight the critical role of data collection strategies in improving agent performance.
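To make the trade-offs above concrete, the following is a minimal sketch (not the authors' implementation) of how on-policy parallel data collection is typically structured in PPO-style agents: the batch size per update is the product of the number of parallel environments and the rollout length, and the collected batch is then reused for several optimization epochs. The environment name, hyper-parameter values, and the placeholder `policy` function are illustrative assumptions.

```python
import numpy as np
import gymnasium as gym

num_envs = 8          # parallel actors: more envs -> larger, more decorrelated batches
rollout_length = 128  # steps per env before an update: longer rollouts -> staler on-policy data
update_epochs = 4     # training passes over each batch: more passes -> risk of overfitting

# Synchronous vectorized environments acting as parallel data-collection actors.
envs = gym.vector.SyncVectorEnv(
    [lambda: gym.make("CartPole-v1") for _ in range(num_envs)]
)

def policy(obs):
    # Placeholder for the agent's policy network; here, uniform random actions.
    return np.array([envs.single_action_space.sample() for _ in range(num_envs)])

obs, _ = envs.reset(seed=0)
for update in range(10):
    # --- on-policy collection: batch size = num_envs * rollout_length transitions ---
    batch_obs, batch_actions, batch_rewards = [], [], []
    for _ in range(rollout_length):
        actions = policy(obs)
        next_obs, rewards, terminations, truncations, _ = envs.step(actions)
        batch_obs.append(obs)
        batch_actions.append(actions)
        batch_rewards.append(rewards)
        obs = next_obs

    # --- optimization: several passes over the same freshly collected batch ---
    for epoch in range(update_epochs):
        pass  # PPO clipped-surrogate loss and gradient steps would go here
```

Under this structure, "scaling data" can mean increasing `num_envs` or `rollout_length`; the abstract's finding is that, for a fixed batch size, adding parallel environments tends to help more than lengthening rollouts.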
@article{mayor2025_2506.03404,
  title   = {The Impact of On-Policy Parallelized Data Collection on Deep Reinforcement Learning Networks},
  author  = {Walter Mayor and Johan Obando-Ceron and Aaron Courville and Pablo Samuel Castro},
  journal = {arXiv preprint arXiv:2506.03404},
  year    = {2025}
}