Title

Human Factors in Model Interpretability: Industry Practices, Challenges, and Needs

Authors

Sungsoo Ray Hong, Jessica Hullman, Enrico Bertini

Abstract

As the use of machine learning (ML) models in product development and data-driven decision-making processes became pervasive in many domains, people's focus on building a well-performing model has increasingly shifted to understanding how their model works. While scholarly interest in model interpretability has grown rapidly in research communities like HCI, ML, and beyond, little is known about how practitioners perceive and aim to provide interpretability in the context of their existing workflows. This lack of understanding of interpretability as practiced may prevent interpretability research from addressing important needs, or lead to unrealistic solutions. To bridge this gap, we conducted 22 semi-structured interviews with industry practitioners to understand how they conceive of and design for interpretability while they plan, build, and use their models. Based on a qualitative analysis of our results, we differentiate interpretability roles, processes, goals and strategies as they exist within organizations making heavy use of ML models. The characterization of interpretability work that emerges from our analysis suggests that model interpretability frequently involves cooperation and mental model comparison between people in different roles, often aimed at building trust not only between people and models but also between people within the organization. We present implications for design that discuss gaps between the interpretability challenges that practitioners face in their practice and approaches proposed in the literature, highlighting possible research directions that can better address real-world needs.
