Paper Title

Explainability Case Studies

Authors

Zevenbergen, Ben, Woodruff, Allison, Kelley, Patrick Gage

Abstract

Explainability is one of the key ethical concepts in the design of AI systems. However, attempts to operationalize this concept thus far have tended to focus on approaches such as new software for model interpretability or guidelines with checklists. Rarely do existing tools and guidance incentivize the designers of AI systems to think critically and strategically about the role of explanations in their systems. We present a set of case studies of a hypothetical AI-enabled product, which serves as a pedagogical tool to empower product designers, developers, students, and educators to develop a holistic explainability strategy for their own products.
