Paper Title

DT+GNN: A Fully Explainable Graph Neural Network using Decision Trees

Paper Authors

Peter Müller, Lukas Faber, Karolis Martinkus, Roger Wattenhofer

Paper Abstract

We propose the fully explainable Decision Tree Graph Neural Network (DT+GNN) architecture. In contrast to existing black-box GNNs and post-hoc explanation methods, the reasoning of DT+GNN can be inspected at every step. To achieve this, we first construct a differentiable GNN layer, which uses a categorical state space for nodes and messages. This allows us to convert the trained MLPs in the GNN into decision trees. These trees are pruned using our newly proposed method to ensure they are small and easy to interpret. We can also use the decision trees to compute traditional explanations. We demonstrate on both real-world datasets and synthetic GNN explainability benchmarks that this architecture works as well as traditional GNNs. Furthermore, we leverage the explainability of DT+GNNs to find interesting insights into many of these datasets, with some surprising results. We also provide an interactive web tool to inspect DT+GNN's decision making.
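The central conversion step described in the abstract, replacing each layer's trained MLP with a decision tree over a categorical state space, can be illustrated with a short sketch. The snippet below is not the authors' implementation; it assumes a PyTorch MLP, scikit-learn's DecisionTreeClassifier, and invented placeholders (NUM_STATES, mlp_inputs_from_graph, the random activations) purely for illustration. The tree is fitted to imitate the MLP's argmax outputs on inputs observed during training, and the depth cap is only a crude stand-in for the pruning method proposed in the paper.

```python
# Minimal sketch (not the authors' code): distilling one trained MLP from a
# GNN layer into a decision tree over categorical states.
import numpy as np
import torch
import torch.nn as nn
from sklearn.tree import DecisionTreeClassifier

NUM_STATES = 6  # size of the categorical state space (illustrative choice)

# Stand-in for one trained per-layer MLP inside the GNN (weights are random
# here; in practice this would be the MLP after DT+GNN training).
mlp = nn.Sequential(
    nn.Linear(2 * NUM_STATES, 32),
    nn.ReLU(),
    nn.Linear(32, NUM_STATES),
)

def mlp_inputs_from_graph(node_states, neighbor_state_counts):
    """Concatenate each node's one-hot state with per-state counts of the
    messages it received, i.e. the layer's categorical input (hypothetical
    encoding for this sketch)."""
    return np.concatenate([node_states, neighbor_state_counts], axis=1)

# Fake activations standing in for the states observed while running the
# trained model over the training graphs.
rng = np.random.default_rng(0)
node_states = np.eye(NUM_STATES)[rng.integers(0, NUM_STATES, size=1000)]
neighbor_counts = rng.integers(0, 5, size=(1000, NUM_STATES)).astype(float)
X = mlp_inputs_from_graph(node_states, neighbor_counts)

# The MLP's argmax output is the categorical state the tree must reproduce.
with torch.no_grad():
    y = mlp(torch.tensor(X, dtype=torch.float32)).argmax(dim=1).numpy()

# Fit a small decision tree that imitates the MLP on these inputs; the depth
# cap is a rough proxy for the pruning step that keeps trees interpretable.
tree = DecisionTreeClassifier(max_depth=4).fit(X, y)
print("fidelity to the MLP on observed inputs:", tree.score(X, y))
```

Because both the node states and the messages are categorical, the tree's splits read directly as rules over observed states and message counts, which is what makes the per-layer reasoning inspectable.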
