Paper Title
EDoG: Adversarial Edge Detection For Graph Neural Networks
Paper Authors
Paper Abstract
Graph Neural Networks (GNNs) have been widely applied to tasks such as bioinformatics, drug design, and social networks. However, recent studies have shown that GNNs are vulnerable to adversarial attacks that aim to mislead node or subgraph classification predictions by adding subtle perturbations. Detecting these attacks is challenging due to the small magnitude of the perturbations and the discrete nature of graph data. In this paper, we propose EDoG, a general adversarial edge detection pipeline based on graph generation that does not require knowledge of the attack strategies. Specifically, we propose a novel graph generation approach combined with link prediction to detect suspicious adversarial edges. To train the graph generative model effectively, we sample several sub-graphs from the given graph data. We show that, since the number of adversarial edges is usually small in practice, the sampled sub-graphs contain adversarial edges only with low probability, by a union-bound argument. In addition, to handle strong attacks that perturb a large number of edges, we propose a set of novel features to perform outlier detection as a preprocessing step for our detection. Extensive experimental results on three real-world graph datasets, including a private transaction rule dataset from a major company, and on two types of synthetic graphs with controlled properties show that EDoG achieves above 0.8 AUC against four state-of-the-art unseen attack strategies without requiring any knowledge of the attack type, and around 0.85 AUC when the attack type is known. EDoG significantly outperforms traditional malicious edge detection baselines. We also show that it is difficult for an adaptive attack, even with full knowledge of our detection pipeline, to bypass it.
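
The sketch below is a minimal illustration (not the authors' implementation) of the two ingredients the abstract describes: sampling sub-graphs from the input graph and scoring each existing edge with a link-prediction signal, then flagging low-plausibility edges as suspicious. It assumes Python with networkx and replaces EDoG's learned graph generative model with a simple Jaccard-coefficient scorer; the function names, sampling ratio, and threshold are hypothetical.

# Illustrative sketch of sub-graph sampling plus per-edge plausibility scoring.
# The learned graph generative model from the paper is replaced here by a
# Jaccard-coefficient heuristic; all names and thresholds are assumptions.
import random
import networkx as nx

def sample_subgraphs(graph, num_subgraphs=5, sample_ratio=0.5, seed=0):
    """Sample node-induced sub-graphs. With few adversarial edges, most
    sampled sub-graphs are unlikely to contain one (union-bound argument)."""
    rng = random.Random(seed)
    nodes = list(graph.nodes())
    k = max(2, int(sample_ratio * len(nodes)))
    return [graph.subgraph(rng.sample(nodes, k)).copy() for _ in range(num_subgraphs)]

def edge_plausibility(graph, u, v):
    """Score how plausible edge (u, v) is: Jaccard coefficient of the two
    endpoint neighborhoods, computed with the edge temporarily removed."""
    g = graph.copy()
    if g.has_edge(u, v):
        g.remove_edge(u, v)
    # jaccard_coefficient yields (u, v, score) tuples for the given pairs
    _, _, score = next(nx.jaccard_coefficient(g, [(u, v)]))
    return score

def detect_suspicious_edges(graph, threshold=0.05):
    """Flag edges whose plausibility falls below a (hypothetical) threshold."""
    return [(u, v) for u, v in graph.edges()
            if edge_plausibility(graph, u, v) < threshold]

if __name__ == "__main__":
    G = nx.karate_club_graph()
    G.add_edge(0, 26)  # simulate a single "adversarial" edge between distant nodes
    subgraphs = sample_subgraphs(G)          # step 1: sub-graph sampling
    suspicious = detect_suspicious_edges(G)  # step 2: per-edge plausibility scoring
    print("sampled sub-graph sizes:", [sg.number_of_nodes() for sg in subgraphs])
    print("suspicious edges:", suspicious)

Running the script flags the injected edge, whose endpoints share no neighbors (along with, possibly, a few genuine bridging edges). In the actual pipeline, the sampled sub-graphs are what the graph generative model is trained on; the heuristic scorer here only stands in for that model's link-prediction output.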