Paper Title

"Explanation" is Not a Technical Term: The Problem of Ambiguity in XAI

Paper Authors

Leilani H. Gilpin, Andrew R. Paley, Mohammed A. Alam, Sarah Spurlock, Kristian J. Hammond

Paper Abstract

There is broad agreement that Artificial Intelligence (AI) systems, particularly those using Machine Learning (ML), should be able to "explain" their behavior. Unfortunately, there is little agreement as to what constitutes an "explanation." This has caused a disconnect between the explanations that systems produce in service of explainable Artificial Intelligence (XAI) and those explanations that users and other audiences actually need, which should be defined by the full spectrum of functional roles, audiences, and capabilities for explanation. In this paper, we explore the features of explanations and how to use those features in evaluating their utility. We focus on the requirements for explanations defined by their functional role, the knowledge states of users who are trying to understand them, and the availability of the information needed to generate them. Further, we discuss the risk of XAI enabling trust in systems without establishing their trustworthiness and define a critical next step for the field of XAI to establish metrics to guide and ground the utility of system-generated explanations.
