Paper Title
The Importance of Modeling Data Missingness in Algorithmic Fairness: A Causal Perspective
Paper Authors
Paper Abstract
Training datasets for machine learning often have some form of missingness. For example, to learn a model for deciding whom to give a loan, the available training data includes individuals who were given a loan in the past, but not those who were not. This missingness, if ignored, nullifies any fairness guarantee of the training procedure when the model is deployed. Using causal graphs, we characterize the missingness mechanisms in different real-world scenarios. We show conditions under which various distributions, used in popular fairness algorithms, can or cannot be recovered from the training data. Our theoretical results imply that many of these algorithms cannot guarantee fairness in practice. Modeling missingness also helps to identify correct design principles for fair algorithms. For example, in multi-stage settings where decisions are made in multiple screening rounds, we use our framework to derive the minimal distributions required to design a fair algorithm. Our proposed algorithm decentralizes the decision-making process and still achieves similar performance to the optimal algorithm that requires centralization and non-recoverable distributions.
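The loan example in the abstract can be illustrated with a small simulation. This is a hypothetical sketch (the thresholds and score distribution are invented for illustration, not taken from the paper): when training data contains only past loan recipients, the observed sample is systematically shifted relative to the full applicant population, which is the kind of selection-induced missingness the paper analyzes.

```python
import random

random.seed(0)

# Hypothetical applicant population: each applicant has a credit score.
population = [random.gauss(600, 100) for _ in range(100_000)]

# Historically, only applicants above an (assumed) score cutoff received
# loans, so repayment outcomes are observed only for them. The rest of
# the population is missing from the training data, not at random.
observed = [s for s in population if s > 650]

pop_mean = sum(population) / len(population)
obs_mean = sum(observed) / len(observed)

# The observed training sample over-represents high-score applicants,
# so statistics estimated from it do not reflect the full population.
print(f"population mean score: {pop_mean:.1f}")
print(f"observed (loan recipients) mean score: {obs_mean:.1f}")
```

A model trained and audited for fairness only on the `observed` sample can therefore behave very differently once deployed on the full population, which is why the paper argues the missingness mechanism must be modeled explicitly.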