Deep reinforcement learning (DRL) has been widely used for dynamic algorithm configuration, particularly in evolutionary computation, which benefits from the adaptive update of parameters during algorithmic execution. However, applying DRL to algorithm configuration for multi-objective combinatorial optimization (MOCO) problems remains relatively unexplored. This paper presents a novel graph neural network (GNN)-based DRL approach to configure multi-objective evolutionary algorithms. We model dynamic algorithm configuration as a Markov decision process, representing the convergence of solutions in the objective space as a graph, whose node embeddings are learned by a GNN to enhance the state representation. Experiments on diverse MOCO challenges indicate that our method outperforms traditional and DRL-based algorithm configuration methods in terms of efficacy and adaptability. It also exhibits advantageous generalizability across objective types and problem sizes, and applicability to different evolutionary computation methods.
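To make the graph-based state representation concrete, the sketch below builds a k-nearest-neighbour graph over a population's objective vectors, the kind of structure a GNN could then embed. This is a minimal illustration under assumed choices (Euclidean distance, the `knn_adjacency` helper, and k are all hypothetical), not the paper's exact construction.

```python
import numpy as np

def knn_adjacency(points, k=2):
    """Build a symmetric k-nearest-neighbour adjacency matrix
    over a population's objective vectors.

    points: (n, m) array of n solutions' m objective values.
    Returns an (n, n) 0/1 adjacency matrix with no self-loops.
    """
    n = len(points)
    # Pairwise Euclidean distances in objective space.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # exclude self as a neighbour
    adj = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in np.argsort(d[i])[:k]:     # indices of the k closest solutions
            adj[i, j] = adj[j, i] = 1      # symmetrise for an undirected graph
    return adj

# A toy population of 4 solutions with 2 objectives.
pop = np.array([[0.0, 1.0], [0.1, 0.9], [0.9, 0.1], [1.0, 0.0]])
A = knn_adjacency(pop, k=1)
```

With k=1, the two clusters of nearby solutions each form an edge, so the adjacency links solutions 0–1 and 2–3; a GNN message-passing layer over this graph would aggregate information from neighbouring solutions into each node embedding.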
@article{reijnen2025_2505.16471,
  title   = {Graph-Supported Dynamic Algorithm Configuration for Multi-Objective Combinatorial Optimization},
  author  = {Robbert Reijnen and Yaoxin Wu and Zaharah Bukhsh and Yingqian Zhang},
  journal = {arXiv preprint arXiv:2505.16471},
  year    = {2025}
}