Optimizing 2D+1 Packing in Constrained Environments Using Deep Reinforcement Learning

This paper proposes a novel approach based on deep reinforcement learning (DRL) for the 2D+1 packing problem with spatial constraints, an extension of the traditional 2D packing problem that adds a constraint on the height dimension. To address it, a simulator built on the OpenAI Gym framework has been developed to efficiently simulate the packing of rectangular pieces onto two boards with height constraints. The simulator supports multidiscrete actions, enabling the joint selection of a position on either board and the type of piece to place. Two DRL-based methods, Proximal Policy Optimization (PPO) and Advantage Actor-Critic (A2C), have been employed to learn a packing strategy, and their performance is compared against a well-known heuristic baseline (MaxRect-BL). In the experiments carried out, the PPO-based approach proved to be a good solution for solving complex packing problems, highlighting its potential to optimize resource utilization in industrial applications such as the manufacturing of aerospace composites.
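To make the setup concrete, the following is a minimal sketch of a 2D+1 packing simulator with a Gym-style multidiscrete action interface. The class name, board sizes, piece set, and area-based reward are illustrative assumptions, not the authors' actual environment; the real simulator is built on OpenAI Gym, while this sketch only mirrors its `reset`/`step` interface to stay dependency-free.

```python
import numpy as np

# Assumed piece set: (width, depth, piece_height) -- illustrative only.
PIECES = [(2, 2, 1), (1, 3, 2), (3, 1, 1)]

class Packing2D1Env:
    """Hypothetical sketch of a 2D+1 packing environment.

    Two boards are modeled as height maps; each placement must respect
    the board's height limit (the "+1" dimension). The action mirrors a
    Gym MultiDiscrete space: (board_index, x, y, piece_type).
    """

    def __init__(self, width=5, depth=5, max_height=2):
        self.W, self.D, self.max_h = width, depth, max_height
        self.reset()

    def reset(self):
        # One height map per board; 0 marks an empty cell.
        self.boards = np.zeros((2, self.D, self.W), dtype=int)
        return self.boards.copy()

    def step(self, action):
        b, x, y, p = action
        w, d, ph = PIECES[p]
        region = self.boards[b, y:y + d, x:x + w]
        # A placement is valid only if the footprint lies inside the
        # board, the cells are empty, and the piece's height respects
        # the height constraint.
        fits = (x + w <= self.W and y + d <= self.D
                and (region == 0).all() and ph <= self.max_h)
        reward = 0.0
        if fits:
            self.boards[b, y:y + d, x:x + w] = ph
            reward = float(w * d)  # assumed reward: footprint area covered
        done = bool((self.boards > 0).all())
        return self.boards.copy(), reward, done, {}
```

In a real Gym environment the `(board, x, y, piece_type)` tuple would be declared as `spaces.MultiDiscrete([2, W, D, len(PIECES)])`, which is the action format PPO and A2C implementations (e.g. in Stable-Baselines3) consume directly.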
@article{pugliese2025_2503.17573,
  title={Optimizing 2D+1 Packing in Constrained Environments Using Deep Reinforcement Learning},
  author={Victor Ulisses Pugliese and Oséias F. de A. Ferreira and Fabio A. Faria},
  journal={arXiv preprint arXiv:2503.17573},
  year={2025}
}