Paper Title

Software Fairness: An Analysis and Survey

Paper Authors

Soremekun, Ezekiel; Papadakis, Mike; Cordy, Maxime; Le Traon, Yves

Paper Abstract

In the last decade, researchers have studied fairness as a software property. In particular, how to engineer fair software systems? This includes specifying, designing, and validating fairness properties. However, the landscape of works addressing bias as a software engineering concern is unclear, i.e., techniques and studies that analyze the fairness properties of learning-based software. In this work, we provide a clear view of the state-of-the-art in software fairness analysis. To this end, we collect, categorize and conduct an in-depth analysis of 164 publications investigating the fairness of learning-based software systems. Specifically, we study the evaluated fairness measure, the studied tasks, the type of fairness analysis, the main idea of the proposed approaches, and the access level (e.g., black, white, or grey box). Our findings include the following: (1) Fairness concerns (such as fairness specification and requirements engineering) are under-studied; (2) Fairness measures such as conditional, sequential, and intersectional fairness are under-explored; (3) Unstructured datasets (e.g., audio, image, and text) are barely studied for fairness analysis; and (4) Software fairness analysis techniques hardly employ white-box, in-processing machine learning (ML) analysis methods. In summary, we observed several open challenges including the need to study intersectional/sequential bias, policy-based bias handling, and human-in-the-loop, socio-technical bias mitigation.
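To make the fairness-measure terminology in the abstract concrete, the following is a minimal illustrative sketch (ours, not part of the surveyed work) of how a demographic parity difference and a simple intersectional parity gap could be computed for a binary classifier's decisions. All variable names, attributes, and data here are hypothetical.

# Illustrative sketch, not from the paper: demographic parity and a simple
# intersectional parity gap over hypothetical binary decisions.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-decision rates between two groups coded 0/1."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def intersectional_parity_gap(y_pred, groups):
    """Largest gap in positive-decision rates across intersectional subgroups.

    `groups` is an (n, k) array of k protected attributes; each distinct row
    (e.g., a sex-by-race combination) defines one subgroup.
    """
    row_keys = [tuple(row) for row in groups]
    rates = []
    for key in set(row_keys):
        mask = np.array([rk == key for rk in row_keys])
        rates.append(y_pred[mask].mean())
    return max(rates) - min(rates)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y_pred = rng.integers(0, 2, size=1000)   # hypothetical binary model decisions
    sex = rng.integers(0, 2, size=1000)      # hypothetical protected attribute 1
    race = rng.integers(0, 3, size=1000)     # hypothetical protected attribute 2
    print("Demographic parity difference (sex):",
          demographic_parity_difference(y_pred, sex))
    print("Intersectional parity gap (sex x race):",
          intersectional_parity_gap(y_pred, np.column_stack([sex, race])))

A value near zero for either quantity indicates similar positive-decision rates across (sub)groups; the intersectional variant checks each combination of attributes rather than each attribute in isolation, which is one of the under-explored measures the survey highlights.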
