Paper Title

Abductive Knowledge Induction From Raw Data

Paper Authors

Dai, Wang-Zhou; Muggleton, Stephen H.

Paper Abstract

For many reasoning-heavy tasks involving raw inputs, it is challenging to design an appropriate end-to-end learning pipeline. Neuro-Symbolic Learning, which divides the process into sub-symbolic perception and symbolic reasoning, attempts to utilise data-driven machine learning and knowledge-driven reasoning simultaneously. However, such systems suffer from exponential computational complexity at the interface between the two components, where the sub-symbolic learning model lacks direct supervision and the symbolic model lacks accurate input facts. Hence, most of them assume the existence of a strong symbolic knowledge base and learn only the perception model, avoiding a crucial question: where does the knowledge come from? In this paper, we present Abductive Meta-Interpretive Learning ($Meta_{Abd}$), which unites abduction and induction to learn neural networks and induce logic theories jointly from raw data. Experimental results demonstrate that $Meta_{Abd}$ not only outperforms the compared systems in predictive accuracy and data efficiency but also induces logic programs that can be re-used as background knowledge in subsequent learning tasks. To the best of our knowledge, $Meta_{Abd}$ is the first system that can jointly learn neural networks from scratch and induce recursive first-order logic theories with predicate invention.
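The abduce-and-induce loop described in the abstract can be made concrete with a small sketch. The Python below is a minimal illustration, not the authors' Prolog-based $Meta_{Abd}$ implementation: raw MNIST images are replaced by noisy scalars, the neural network by a two-feature softmax regression, and full Meta-Interpretive Learning with predicate invention by a choice between two hand-written candidate programs (`sum` and `product`). All names here (`make_example`, `perceive`, `abduce`, `PROGRAMS`) are invented for the example. The loop alternates abducing the most probable digit assignment consistent with each symbolic label, scoring candidate programs by the abduced likelihood, and training the perception model on the abduced pseudo-labels.

```python
# Minimal illustrative sketch of an abduce-then-train loop (toy assumptions;
# not the authors' implementation of Meta_Abd).
import itertools
import numpy as np

rng = np.random.default_rng(0)
N_DIGITS = 10

def make_example(k=2):
    """k hidden digits, their noisy 'raw' perceptions, and the symbolic
    label produced by the (unknown to the learner) ground-truth program."""
    digits = rng.integers(0, N_DIGITS, size=k)
    xs = digits + rng.normal(0.0, 0.3, size=k)  # stand-in for MNIST images
    return xs, int(digits.sum())                # only the sum is observed

# Hypothesis space for "induction": two hand-written candidate programs.
PROGRAMS = {"sum": lambda ds: int(sum(ds)),
            "product": lambda ds: int(np.prod(ds))}

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def perceive(W, b, xs):
    """Perception model: P(digit | x) for each raw input, via a tiny
    softmax regression on the features (x, x^2)."""
    feats = np.stack([xs, xs ** 2], axis=-1)
    return softmax(feats @ W + b)               # shape (k, 10)

def abduce(probs, y, program):
    """Most probable digit assignment consistent with program(ds) == y."""
    best, best_p = None, 0.0
    for ds in itertools.product(range(N_DIGITS), repeat=probs.shape[0]):
        if program(ds) != y:
            continue
        p = float(np.prod([probs[i, d] for i, d in enumerate(ds)]))
        if p > best_p:
            best, best_p = ds, p
    return best, best_p

data = [make_example() for _ in range(200)]
W = rng.normal(0.0, 0.1, (2, N_DIGITS))
b = np.zeros(N_DIGITS)

for epoch in range(20):
    scores = {name: 0.0 for name in PROGRAMS}
    abductions = {name: [] for name in PROGRAMS}
    for xs, y in data:                          # E-step: abduce pseudo-labels
        probs = perceive(W, b, xs)
        for name, prog in PROGRAMS.items():
            ds, p = abduce(probs, y, prog)
            scores[name] += np.log(p + 1e-12)
            abductions[name].append((xs, ds))
    best_prog = max(scores, key=scores.get)     # "induction": pick best program
    for xs, ds in abductions[best_prog]:        # M-step: fit perception model
        if ds is None:
            continue
        feats = np.stack([xs, xs ** 2], axis=-1)
        grad = (perceive(W, b, xs) - np.eye(N_DIGITS)[list(ds)]) / len(xs)
        W -= 0.5 * feats.T @ grad
        b -= 0.5 * grad.sum(axis=0)

# Sanity check: the perception model was never given a digit label directly.
test_d = rng.integers(0, N_DIGITS, size=500)
test_x = test_d + rng.normal(0.0, 0.3, size=500)
acc = (perceive(W, b, test_x).argmax(axis=-1) == test_d).mean()
print(f"induced program: {best_prog}, digit accuracy: {acc:.2f}")
```

The exhaustive enumeration inside `abduce` is precisely the exponential interface cost the abstract refers to; $Meta_{Abd}$'s contribution is to manage that search while also inducing the program itself, rather than selecting from a fixed menu as in this sketch.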
