Paper Title

A taxonomy of explanations to support Explainability-by-Design

Authors

Tsakalakis, Niko, Stalla-Bourdillon, Sophie, Huynh, Trung Dong, Moreau, Luc

Abstract

As automated decision-making solutions are increasingly applied to all aspects of everyday life, the capability to generate meaningful explanations for a variety of stakeholders (i.e., decision-makers, recipients of decisions, auditors, regulators...) becomes crucial. In this paper, we present a taxonomy of explanations that was developed as part of a holistic 'Explainability-by-Design' approach for the purposes of the project PLEAD. The taxonomy was built with a view to producing explanations for a wide range of requirements stemming from a variety of regulatory frameworks or policies set at the organizational level, either to translate high-level compliance requirements or to meet business needs. The taxonomy comprises nine dimensions. It is used as a stand-alone classifier of explanations conceived as detective controls, in order to help support automated compliance strategies. A machine-readable format of the taxonomy is provided in the form of a light ontology, and the benefits of starting the Explainability-by-Design journey with such a taxonomy are demonstrated through a series of examples.
