Unlearnable Graph: Protecting Graphs from Unauthorized Exploitation

Abstract
While graph-structured data is becoming increasingly popular across various fields, its spread also raises concerns about the unauthorized exploitation of personal data to train commercial graph neural network (GNN) models, which can compromise privacy. To address this issue, we propose a novel method for generating unlearnable graph examples. By injecting delusive but imperceptible noise into graphs with our Error-Minimizing Structural Poisoning (EMinS) module, we render the graphs unexploitable. Notably, by modifying at most a small fraction of the potential edges in the graph data, our method substantially decreases classification accuracy on the COLLAB dataset.
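The abstract gives only a high-level description of EMinS. As a rough illustration of the error-minimizing idea it names, the sketch below alternates between fitting a surrogate GCN on the poisoned graph and greedily flipping the single edge whose perturbation most *decreases* the surrogate's training loss, under a fixed flip budget. Everything here is an assumption for illustration, not the authors' implementation: the names `SurrogateGCN` and `emins_poison` are hypothetical, the dense-adjacency two-layer GCN is a simplification, and a node-level loss stands in for the paper's setting.

```python
import torch
import torch.nn.functional as F


class SurrogateGCN(torch.nn.Module):
    """Minimal dense-adjacency GCN surrogate (illustrative only)."""

    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.w1 = torch.nn.Linear(in_dim, hid_dim)
        self.w2 = torch.nn.Linear(hid_dim, n_classes)

    def forward(self, x, adj):
        # Symmetrically normalize the adjacency with self-loops: D^-1/2 (A+I) D^-1/2.
        a = adj + torch.eye(adj.size(0), device=adj.device)
        d = a.sum(1).clamp(min=1).rsqrt()
        a_norm = d.unsqueeze(1) * a * d.unsqueeze(0)
        h = F.relu(self.w1(a_norm @ x))
        return self.w2(a_norm @ h)


def emins_poison(x, adj, labels, budget, outer_steps=10, inner_steps=20):
    """Greedy error-minimizing structural poisoning (hypothetical sketch).

    Alternates between (1) training the surrogate on the current poisoned
    graph and (2) flipping the edge whose first-order effect most lowers
    the training loss, up to `budget` flips in total.
    """
    model = SurrogateGCN(x.size(1), 16, int(labels.max()) + 1)
    opt = torch.optim.Adam(model.parameters(), lr=0.01)
    adj = adj.clone()
    n = adj.size(0)
    flipped = 0
    for _ in range(outer_steps):
        # (1) Inner loop: fit the surrogate to the poisoned graph.
        for _ in range(inner_steps):
            opt.zero_grad()
            F.cross_entropy(model(x, adj), labels).backward()
            opt.step()
        if flipped >= budget:
            break
        # (2) Outer step: score flips via the loss gradient w.r.t. a
        # continuous relaxation of the adjacency matrix.
        adj_var = adj.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x, adj_var), labels)
        grad = torch.autograd.grad(loss, adj_var)[0]
        # Approximate loss change from flipping entry (i, j):
        # +grad where adj=0 (add edge), -grad where adj=1 (remove edge).
        score = grad * (1 - 2 * adj)
        # Restrict to the strict upper triangle (undirected, no self-loops).
        mask = torch.triu(torch.ones_like(score, dtype=torch.bool), diagonal=1)
        score = torch.where(mask, score, torch.full_like(score, float("inf")))
        i, j = divmod(torch.argmin(score).item(), n)
        if score[i, j] < 0:  # only flip if it actually reduces the loss
            adj[i, j] = adj[j, i] = 1 - adj[i, j]
            flipped += 1
    return adj
```

The intuition behind the error-*minimizing* objective, as in prior work on unlearnable examples, is that perturbations making the training loss artificially low act as shortcuts: a model trained on such data latches onto the injected structure rather than genuine patterns, so its accuracy on clean graphs drops.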