Paper Title

Explainable AI for Classification using Probabilistic Logic Inference

Paper Authors

Xiuyi Fan, Siyuan Liu, Thomas C. Henderson

Abstract

The overarching goal of Explainable AI is to develop systems that not only exhibit intelligent behaviours, but also are able to explain their rationale and reveal insights. In explainable machine learning, methods that produce a high level of prediction accuracy as well as transparent explanations are valuable. In this work, we present an explainable classification method. Our method works by first constructing a symbolic Knowledge Base from the training data, and then performing probabilistic inferences on such Knowledge Base with linear programming. Our approach achieves a level of learning performance comparable to that of traditional classifiers such as random forests, support vector machines and neural networks. It identifies decisive features that are responsible for a classification as explanations and produces results similar to the ones found by SHAP, a state of the art Shapley Value based method. Our algorithms perform well on a range of synthetic and non-synthetic data sets.
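The inference step described in the abstract, performing probabilistic inference on a knowledge base with linear programming, can be illustrated in miniature. The sketch below follows the classic possible-worlds formulation of probabilistic logic (it is an assumption, not the paper's actual algorithm): given probabilities for a feature A and a rule A → C, a linear program over the probabilities of the four truth-value worlds bounds the probability of the class C. The specific numbers and the use of `scipy.optimize.linprog` are illustrative choices.

```python
# Minimal sketch: bound P(C) given P(A) = 0.6 and P(A -> C) = 0.9,
# by linear programming over the probabilities of the four possible
# worlds over {A, C}: (T,T), (T,F), (F,T), (F,F).
from scipy.optimize import linprog

A_eq = [
    [1, 1, 1, 1],  # world probabilities sum to 1
    [1, 1, 0, 0],  # worlds where A holds:      P(A) = 0.6
    [1, 0, 1, 1],  # worlds satisfying A -> C:  P(A -> C) = 0.9
]
b_eq = [1.0, 0.6, 0.9]
c = [1, 0, 1, 0]  # objective selects worlds where C holds, i.e. P(C)

bounds = [(0, 1)] * 4
lo = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds).fun
hi = -linprog([-x for x in c], A_eq=A_eq, b_eq=b_eq, bounds=bounds).fun
print(lo, hi)  # P(C) is confined to the interval [0.5, 0.9]
```

Minimizing and maximizing the same objective yields tight lower and upper bounds on the query probability; a classifier built this way can report which constraints (features) pin the bound down, which is the kind of decisive-feature explanation the abstract refers to.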
