Paper Title
Adversarial Inter-Group Link Injection Degrades the Fairness of Graph Neural Networks
Paper Authors
Paper Abstract
We present evidence for the existence and effectiveness of adversarial attacks on graph neural networks (GNNs) that aim to degrade fairness. These attacks can disadvantage a particular subgroup of nodes in GNN-based node classification, where nodes of the underlying network have sensitive attributes, such as race or gender. We conduct qualitative and experimental analyses explaining how adversarial link injection impairs the fairness of GNN predictions. For example, an attacker can compromise the fairness of GNN-based node classification by injecting adversarial links between nodes belonging to opposite subgroups and opposite class labels. Our experiments on empirical datasets demonstrate that adversarial fairness attacks can significantly degrade the fairness of GNN predictions (attacks are effective) with a low perturbation rate (attacks are efficient) and without a significant drop in accuracy (attacks are deceptive). This work demonstrates the vulnerability of GNN models to adversarial fairness attacks. We hope our findings raise awareness about this issue in our community and lay a foundation for the future development of GNN models that are more robust to such attacks.
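The attack heuristic described above can be sketched as follows. This is a minimal illustration, not the paper's actual attack: the function name and the random sampling of candidate links are assumptions for demonstration purposes (the paper's attacker presumably selects links adversarially rather than at random). The sketch only captures the stated candidate set, i.e. pairs of nodes with opposite sensitive attributes and opposite class labels, constrained by a perturbation budget.

```python
import random

def inject_cross_group_links(labels, sensitive, budget, seed=0):
    """Hypothetical sketch of the fairness attack's link-injection step.

    Candidate edges connect nodes that differ BOTH in sensitive
    attribute (e.g. race or gender subgroup) AND in class label,
    as described in the abstract. Up to `budget` edges are sampled,
    keeping the perturbation rate low.
    """
    n = len(labels)
    # Enumerate all node pairs (i < j) in opposite subgroups
    # with opposite class labels.
    candidates = [
        (i, j)
        for i in range(n)
        for j in range(i + 1, n)
        if sensitive[i] != sensitive[j] and labels[i] != labels[j]
    ]
    # Respect the perturbation budget; random choice stands in for
    # the adversarial selection an actual attacker would perform.
    rng = random.Random(seed)
    k = min(budget, len(candidates))
    return rng.sample(candidates, k)
```

For example, with labels `[0, 1, 0, 1]` and sensitive attributes `[0, 0, 1, 1]`, the only qualifying pairs are `(0, 3)` and `(1, 2)`; every injected edge crosses both the subgroup and the class boundary.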