Title
Counterfactual Fairness Is Basically Demographic Parity
Authors
Abstract
Making fair decisions is crucial to ethically implementing machine learning algorithms in social settings. In this work, we consider the celebrated definition of counterfactual fairness [Kusner et al., NeurIPS, 2017]. We begin by showing that an algorithm which satisfies counterfactual fairness also satisfies demographic parity, a far simpler fairness constraint. Similarly, we show that all algorithms satisfying demographic parity can be trivially modified to satisfy counterfactual fairness. Together, our results indicate that counterfactual fairness is basically equivalent to demographic parity, which has important implications for the growing body of work on counterfactual fairness. We then validate our theoretical findings empirically, analyzing three existing algorithms for counterfactual fairness against three simple benchmarks. We find that two simple benchmark algorithms outperform all three existing algorithms -- in terms of fairness, accuracy, and efficiency -- on several data sets. Our analysis leads us to formalize a concrete fairness goal: to preserve the order of individuals within protected groups. We believe transparency around the ordering of individuals within protected groups makes fair algorithms more trustworthy. By design, the two simple benchmark algorithms satisfy this goal while the existing algorithms for counterfactual fairness do not.
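For reference, the two fairness notions compared in the abstract can be stated side by side. In the notation of Kusner et al. [NeurIPS, 2017], with protected attribute A, remaining features X, latent background variables U of the causal model, and predictor \hat{Y}:

    Demographic parity:
        P(\hat{Y} = y \mid A = a) = P(\hat{Y} = y \mid A = a')
        for all y, a, a'.

    Counterfactual fairness:
        P(\hat{Y}_{A \leftarrow a}(U) = y \mid X = x, A = a)
          = P(\hat{Y}_{A \leftarrow a'}(U) = y \mid X = x, A = a)
        for all y, a, a' and every context X = x.

The fairness goal stated at the end of the abstract -- equalizing score distributions across protected groups while preserving the order of individuals within each group -- can be illustrated with a within-group rank transform. The sketch below is a minimal illustration of that general idea, not a reproduction of the paper's benchmark algorithms; the function name quantile_repair and the uniform target distribution are assumptions made here for the example.

    import numpy as np

    def quantile_repair(scores, groups):
        """Replace each raw score with its normalized rank within its own
        protected group. Every group then has (approximately) the same
        uniform score distribution, so any threshold admits roughly the
        same fraction of each group (demographic parity), while the
        relative order of individuals inside each group is unchanged.
        Illustrative sketch only, not the algorithm from the paper."""
        scores = np.asarray(scores, dtype=float)
        groups = np.asarray(groups)
        repaired = np.empty(len(scores), dtype=float)
        for g in np.unique(groups):
            idx = np.flatnonzero(groups == g)
            # Stable sort so tied scores keep their input order.
            order = np.argsort(scores[idx], kind="stable")
            ranks = np.empty(len(idx), dtype=float)
            ranks[order] = np.arange(1, len(idx) + 1)
            repaired[idx] = ranks / len(idx)  # normalized ranks in (0, 1]
        return repaired

    # Toy usage: two groups whose raw scores live on very different scales.
    scores = np.array([0.9, 0.7, 0.8, 0.2, 0.1, 0.3])
    groups = np.array(["a", "a", "a", "b", "b", "b"])
    print(quantile_repair(scores, groups))
    # -> [1.0, 0.333, 0.667, 0.667, 0.333, 1.0] (rounded)
    # Group "a" order (0.7 < 0.8 < 0.9) and group "b" order (0.1 < 0.2 < 0.3)
    # are both preserved, and the two groups now share one score distribution.

Note that parity holds only up to the discreteness of group sizes: a group with n_g members receives the repaired scores {1/n_g, ..., 1}, so a fixed threshold admits the same fraction of each group up to rounding.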