Paper Title
Explaining Classifications to Non-Experts: An XAI User Study of Post-Hoc Explanations for a Classifier When People Lack Expertise
Paper Authors
Paper Abstract
Very few eXplainable AI (XAI) studies consider how users' understanding of explanations might change depending on whether they know more or less about the to-be-explained domain (i.e., whether they differ in their expertise). Yet, expertise is a critical facet of most high-stakes human decision making (e.g., understanding how a trainee doctor differs from an experienced consultant). Accordingly, this paper reports a novel user study (N=96) on how people's expertise in a domain affects their understanding of post-hoc explanations by example for a deep-learning, black-box classifier. The results show that people's understanding of explanations for correct and incorrect classifications changes dramatically, on several dimensions (e.g., response times, perceptions of correctness and helpfulness), when the image-based domain considered is familiar (i.e., MNIST) as opposed to unfamiliar (i.e., Kannada-MNIST). The wider implications of these new findings for XAI strategies are discussed.
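The abstract refers to post-hoc "explanation by example", where a classifier's prediction is justified by showing the user a similar, known training instance. As a minimal sketch of one common way such explanations are produced (a nearest-neighbor lookup in the classifier's feature space), the snippet below is illustrative only: the names (`train_feats`, `explain_by_example`) and the synthetic data are assumptions, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical setup: "train_feats" stands in for penultimate-layer
# activations of a trained CNN over the training set; in a real study
# these would come from the black-box classifier being explained.
rng = np.random.default_rng(0)
n_train, dim = 1000, 64
train_feats = rng.normal(size=(n_train, dim))        # synthetic features
train_labels = rng.integers(0, 10, size=n_train)     # synthetic digit labels

def explain_by_example(query_feat, predicted_label):
    """Return the index of the nearest training example sharing the
    classifier's predicted label; its image would be shown to the user
    as the 'explanation by example' for the query."""
    same_class = np.flatnonzero(train_labels == predicted_label)
    dists = np.linalg.norm(train_feats[same_class] - query_feat, axis=1)
    return same_class[np.argmin(dists)]

# Usage: explain one (synthetic) test item the classifier labeled "3".
query = rng.normal(size=dim)
print("nearest same-class training example:", explain_by_example(query, 3))
```

In a familiar domain such as MNIST, a user can visually judge whether the retrieved example resembles the query digit; in an unfamiliar domain such as Kannada-MNIST, that judgment is harder, which is the expertise contrast the study manipulates.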