Paper Title
Good Counterfactuals and Where to Find Them: A Case-Based Technique for Generating Counterfactuals for Explainable AI (XAI)
Paper Authors
Mark T. Keane, Barry Smyth
Paper Abstract
Recently, a groundswell of research has identified the use of counterfactual explanations as a potentially significant solution to the Explainable AI (XAI) problem. It is argued that (a) technically, these counterfactual cases can be generated by permuting problem features until a class change is found, (b) psychologically, they are much more causally informative than factual explanations, and (c) legally, they are GDPR-compliant. However, there are issues with finding good counterfactuals using current techniques (e.g., sparsity and plausibility). We show that many commonly used datasets appear to have few good counterfactuals for explanation purposes. So, we propose a new case-based approach for generating counterfactuals, using novel ideas about the counterfactual potential and explanatory coverage of a case-base. The new technique reuses patterns of good counterfactuals, present in a case-base, to generate analogous counterfactuals that can explain new problems and their solutions. Several experiments show how this technique can improve the counterfactual potential and explanatory coverage of case-bases that were previously found wanting.
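To make the abstract's technical ideas concrete, below is a minimal sketch (not the authors' code) of (a) mining "good" counterfactual pairs from a case-base, (b) measuring its counterfactual potential as the fraction of cases covered by such pairs, and (c) reusing a pair's feature-difference pattern to adapt an analogous counterfactual for a new query. The function names (`good_pairs`, `cf_potential`, `explain`), the two-feature sparsity threshold, and the `predict` hook standing in for the underlying model are all illustrative assumptions.

```python
# A rough sketch of a case-based counterfactual generator, under the
# assumptions stated above; not the authors' implementation.
import numpy as np

def good_pairs(X, y, max_diffs=2):
    """Mine 'good' counterfactual pairs: differently-classed cases that
    differ in at most `max_diffs` features (a sparsity criterion)."""
    pairs = []
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            if y[i] != y[j]:
                diff = list(np.flatnonzero(X[i] != X[j]))
                if len(diff) <= max_diffs:
                    pairs.append((i, j, diff))
    return pairs

def cf_potential(X, y, max_diffs=2):
    """Counterfactual potential: fraction of cases with at least one
    good counterfactual partner in the case-base."""
    covered = set()
    for i, j, _ in good_pairs(X, y, max_diffs):
        covered.update((i, j))
    return len(covered) / len(X)

def explain(query, X, y, predict, max_diffs=2):
    """Reuse the feature-difference pattern of the nearest good pair to
    build an analogous counterfactual for `query`, keeping it only if
    the model `predict` confirms a class change."""
    pairs = good_pairs(X, y, max_diffs)
    if not pairs:
        return None
    # Pick the pair whose factual member is closest to the query.
    i, j, diff = min(pairs, key=lambda p: np.linalg.norm(query - X[p[0]]))
    candidate = np.array(query, dtype=float)
    candidate[diff] = X[j, diff]  # copy only the differing feature values
    return candidate if predict(candidate) != predict(query) else None

if __name__ == "__main__":
    # Toy case-base and a stand-in threshold model.
    X = np.array([[1.0, 0.0], [1.0, 1.0], [3.0, 1.0], [3.0, 0.0]])
    y = np.array([0, 0, 1, 1])
    predict = lambda x: int(x[0] > 2.0)
    print(cf_potential(X, y))                             # 1.0: all cases covered
    print(explain(np.array([1.2, 0.5]), X, y, predict))   # e.g. [3. 1.]
```

The design point this sketch tries to capture: because the adapted counterfactual copies feature values that already co-occur in a stored case, it is more likely to be plausible than one found by blindly permuting features, and the final `predict` check retains only candidates that actually flip the class.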