Paper Title

"A cold, technical decision-maker": Can AI provide explainability, negotiability, and humanity?

Paper Authors

Allison Woodruff, Yasmin Asare Anderson, Katherine Jameson Armstrong, Marina Gkiza, Jay Jennings, Christopher Moessner, Fernanda Viegas, Martin Wattenberg, Lynette Webb, Fabian Wrede, and Patrick Gage Kelley

Paper Abstract

Algorithmic systems are increasingly deployed to make decisions in many areas of people's lives. The shift from human to algorithmic decision-making has been accompanied by concern about potentially opaque decisions that are not aligned with social values, as well as proposed remedies such as explainability. We present results of a qualitative study of algorithmic decision-making, comprising five workshops conducted with a total of 60 participants in Finland, Germany, the United Kingdom, and the United States. We invited participants to reason about decision-making qualities such as explainability and accuracy in a variety of domains. Participants viewed AI as a decision-maker that follows rigid criteria and performs mechanical tasks well, but is largely incapable of subjective or morally complex judgments. We discuss participants' consideration of humanity in decision-making, and introduce the concept of 'negotiability,' the ability to go beyond formal criteria and work flexibly around the system.
