Paper Title
Does the Whole Exceed its Parts? The Effect of AI Explanations on Complementary Team Performance
Paper Authors
Abstract
Many researchers motivate explainable AI with studies showing that human-AI team performance on decision-making tasks improves when the AI explains its recommendations. However, prior studies observed improvements from explanations only when the AI, alone, outperformed both the human and the best team. Can explanations help lead to complementary performance, where team accuracy is higher than either the human or the AI working solo? We conduct mixed-method user studies on three datasets, where an AI with accuracy comparable to humans helps participants solve a task (explaining itself in some conditions). While we observed complementary improvements from AI augmentation, they were not increased by explanations. Rather, explanations increased the chance that humans will accept the AI's recommendation, regardless of its correctness. Our result poses new challenges for human-centered AI: Can we develop explanatory approaches that encourage appropriate trust in AI, and therefore help generate (or improve) complementary performance?