Paper Title

Exploring How Machine Learning Practitioners (Try To) Use Fairness Toolkits

Paper Authors

Wesley Hanwen Deng, Manish Nagireddy, Michelle Seng Ah Lee, Jatinder Singh, Zhiwei Steven Wu, Kenneth Holstein, Haiyi Zhu

Paper Abstract

Recent years have seen the development of many open-source ML fairness toolkits aimed at helping ML practitioners assess and address unfairness in their systems. However, there has been little research investigating how ML practitioners actually use these toolkits in practice. In this paper, we conducted the first in-depth empirical exploration of how industry practitioners (try to) work with existing fairness toolkits. In particular, we conducted think-aloud interviews to understand how participants learn about and use fairness toolkits, and explored the generality of our findings through an anonymous online survey. We identified several opportunities for fairness toolkits to better address practitioner needs and scaffold them in using toolkits effectively and responsibly. Based on these findings, we highlight implications for the design of future open-source fairness toolkits that can support practitioners in better contextualizing, communicating, and collaborating around ML fairness efforts.
