
Learning Heuristics over Large Graphs via Deep Reinforcement Learning

Abstract

Combinatorial optimization problems on graphs are routinely solved in various domains. Recently, it has been shown that heuristics for solving combinatorial problems can be learned using a machine learning-based approach. While existing techniques have primarily focused on obtaining high-quality solutions, the aspect of scalability to billion-sized graphs has not been adequately addressed. In this paper, we propose a deep reinforcement learning framework called GCOMB to learn algorithms that can solve combinatorial problems over graphs at scale. Besides considering traditional NP-hard combinatorial problems, we apply our framework to the popular and challenging data mining problem of Influence Maximization. GCOMB utilizes a Graph Convolutional Network (GCN) to generate node embeddings that predict potential solution nodes. These embeddings are then fed to a Q-learning framework, which learns the combinatorial nature of the problem and predicts the final solution set. Through extensive evaluation on several synthetic and billion-sized real networks, we establish that GCOMB is more than 100 times faster than the state of the art while retaining the same solution quality.
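The two-stage pipeline the abstract describes (embedding-based candidate scoring followed by Q-learning-driven set construction) can be illustrated with a minimal pure-Python sketch. This is not the paper's implementation: here a simple degree score stands in for the GCN embeddings, and a hand-written marginal-coverage gain stands in for the learned Q-function; all names and the toy graph are illustrative.

```python
# Toy sketch of a two-stage pipeline in the spirit of GCOMB:
# (1) score nodes cheaply to shortlist likely solution nodes
#     (stand-in for GCN-generated embeddings),
# (2) greedily build the solution set using a marginal-gain function
#     (stand-in for a learned Q-function). All names are illustrative.

def score_nodes(adj):
    """Stage 1: cheap per-node score (here: degree) used to prune candidates."""
    return {v: len(nbrs) for v, nbrs in adj.items()}

def marginal_gain(node, covered, adj):
    """Stage 2 surrogate Q-value: nodes newly covered if `node` is added."""
    return len(({node} | adj[node]) - covered)

def greedy_select(adj, k, shortlist_size=4):
    """Shortlist high-scoring nodes, then pick k of them by marginal gain."""
    scores = score_nodes(adj)
    candidates = sorted(scores, key=scores.get, reverse=True)[:shortlist_size]
    solution, covered = [], set()
    for _ in range(k):
        best = max(candidates, key=lambda v: marginal_gain(v, covered, adj))
        solution.append(best)
        covered |= {best} | adj[best]
        candidates.remove(best)
    return solution, covered

if __name__ == "__main__":
    # A small undirected graph given as an adjacency map.
    adj = {
        0: {1, 2, 3}, 1: {0, 4}, 2: {0}, 3: {0, 5},
        4: {1}, 5: {3}, 6: set(),
    }
    solution, covered = greedy_select(adj, k=2)
    print(solution, sorted(covered))  # → [0, 1] [0, 1, 2, 3, 4]
```

The pruning step is what targets scalability: the expensive sequential selection only ever examines the shortlist, not the full node set, which is the intuition behind restricting Q-learning to nodes the embedding stage flags as promising.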
