Withdrawing the Edge-Read Permission of Graph Convolutional Networks to Defend Against Adversarial Attacks

Abstract

Node classification based on graph convolutional networks (GCNs) is vulnerable to adversarial attacks that maliciously perturb the graph structure, for example by inserting or deleting edges. In this paper, by formulating a general attack model, we demonstrate a core vulnerability of GCNs: their permission to read edge information creates opportunities for adversarial attacks. To address this problem, we propose an anonymous graph convolutional network (AN-GCN), which classifies nodes without reading the edge information of the graph. Extensive evaluations show that the proposed AN-GCN achieves higher node-classification accuracy than standard GCNs while remaining robust against edge-perturbing adversarial attacks.
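To make the vulnerability concrete, here is a minimal NumPy sketch, not the authors' implementation: it shows how a standard GCN layer propagates features through the symmetrically normalized adjacency matrix, so flipping a single edge changes a node's representation even when its own features are untouched. The 4-node example graph, the gcn_layer helper, and the inserted edge (0, 3) are all illustrative assumptions.

import numpy as np

def gcn_layer(A, X, W):
    """One GCN propagation step: relu(D^-1/2 (A+I) D^-1/2 X W)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(1))  # D^-1/2 as a vector
    A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ X @ W, 0.0)    # ReLU activation

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)     # a 4-node path graph
X = rng.normal(size=(4, 3))                   # node features
W = rng.normal(size=(3, 2))                   # layer weights

clean = gcn_layer(A, X, W)

# Edge-perturbing attack: insert a single edge (0, 3).
A_attacked = A.copy()
A_attacked[0, 3] = A_attacked[3, 0] = 1.0
perturbed = gcn_layer(A_attacked, X, W)

# Node 0's representation shifts although its features are unchanged.
print(np.abs(clean[0] - perturbed[0]).max())

In this picture, the "edge-read permission" is precisely the dependence of gcn_layer on A. As the abstract describes it, AN-GCN classifies nodes without reading that edge information, which is what closes this attack surface; how it does so is detailed in the paper, not in this sketch.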
