Paper Title

Explainable Deep RDFS Reasoner

Authors

Bassem Makni, Ibrahim Abdelaziz, James Hendler

Abstract

Recent research efforts aiming to bridge the Neural-Symbolic gap for RDFS reasoning proved empirically that deep learning techniques can be used to learn RDFS inference rules. However, one of their main deficiencies compared to rule-based reasoners is the lack of derivations for the inferred triples (i.e., explainability in AI terms). In this paper, we build on these approaches to provide not only the inferred graph but also an explanation of how its triples were inferred. In the graph words approach, RDF graphs are represented as sequences of graph words, and inference is achieved through neural machine translation. To achieve explainability in RDFS reasoning, we revisit this approach and introduce a new neural network model that takes the input graph (as a sequence of graph words) together with the encoding of an inferred triple, and outputs the derivation for that triple. We evaluated our justification model on two datasets: a synthetic dataset (the LUBM benchmark) and a real-world dataset (ScholarlyData, about conferences), where the lowest validation accuracy approached 96%.
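
The abstract describes a model that consumes two inputs (the graph-word sequence of the input RDF graph and the encoding of one inferred triple) and decodes a derivation sequence. The sketch below is a minimal illustration of that two-encoder, one-decoder setup and is not the authors' implementation: the GRU layers, vocabulary sizes, embedding dimensions, and all identifiers (JustificationModel, graph_enc, triple_enc, etc.) are assumptions made for the example.

```python
# Minimal sketch (assumed architecture, not the paper's code): two encoders
# (graph words + inferred triple) whose states condition a derivation decoder.
import torch
import torch.nn as nn

class JustificationModel(nn.Module):
    def __init__(self, graph_vocab=5000, triple_vocab=3000, deriv_vocab=3000,
                 emb_dim=128, hidden=256):
        super().__init__()
        self.graph_emb = nn.Embedding(graph_vocab, emb_dim)
        self.triple_emb = nn.Embedding(triple_vocab, emb_dim)
        self.deriv_emb = nn.Embedding(deriv_vocab, emb_dim)
        # Encode the input graph given as a sequence of graph-word ids.
        self.graph_enc = nn.GRU(emb_dim, hidden, batch_first=True)
        # Encode the inferred triple (e.g. subject/predicate/object ids).
        self.triple_enc = nn.GRU(emb_dim, hidden, batch_first=True)
        # Decode the derivation, conditioned on both encodings.
        self.decoder = nn.GRU(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, deriv_vocab)

    def forward(self, graph_words, inferred_triple, deriv_in):
        _, g_state = self.graph_enc(self.graph_emb(graph_words))
        _, t_state = self.triple_enc(self.triple_emb(inferred_triple))
        # Combine the two context vectors to initialize the decoder state.
        dec_out, _ = self.decoder(self.deriv_emb(deriv_in), g_state + t_state)
        return self.out(dec_out)  # logits over derivation-sequence tokens

# Toy usage: batch of 2 graphs (10 graph words each), triples of length 3,
# and target derivation sequences of length 7.
model = JustificationModel()
graph_words = torch.randint(0, 5000, (2, 10))
inferred_triple = torch.randint(0, 3000, (2, 3))
deriv_in = torch.randint(0, 3000, (2, 7))
logits = model(graph_words, inferred_triple, deriv_in)
print(logits.shape)  # torch.Size([2, 7, 3000])
```

In this hypothetical setup the decoder would be trained with cross-entropy against the gold derivation tokens; the paper's actual model builds on its graph-words neural machine translation pipeline, so layer types and input encodings may differ.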
