Paper Title
Mitigating Bias in Federated Learning
Paper Authors
Paper Abstract
Methods to create discrimination-aware models have so far focused on centralized ML, leaving federated learning (FL) unexplored. FL is a rising approach for collaborative ML, in which an aggregator orchestrates multiple parties to train a global model without sharing their training data. In this paper, we discuss causes of bias in FL and propose three pre-processing and in-processing methods to mitigate bias without compromising data privacy, a key FL requirement. As data heterogeneity among parties is one of the challenging characteristics of FL, we conduct experiments over several data distributions to analyze their effects on model performance, fairness metrics, and bias learning patterns. We conduct a comprehensive analysis of our proposed techniques; the results demonstrate that these methods are effective even when parties have skewed data distributions or when as few as 20% of parties employ them.
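The abstract does not spell out the training protocol or the mitigation algorithms, so the following is only a minimal sketch of the general pattern it describes: FedAvg-style aggregation combined with a local, Kamiran-Calders-style reweighing pre-processing step. The party model (logistic regression), all function names, and the protected-attribute array `a` are illustrative assumptions, not necessarily the paper's actual method.

```python
import numpy as np

def reweighing_weights(a, y):
    """Local pre-processing sketch: per-sample weights
    w(a, y) = P(A=a) * P(Y=y) / P(A=a, Y=y), computed only from
    the party's own data so nothing raw leaves the party."""
    w = np.ones(len(y))
    for av in np.unique(a):
        for yv in np.unique(y):
            mask = (a == av) & (y == yv)
            if mask.any():
                w[mask] = (a == av).mean() * (y == yv).mean() / mask.mean()
    return w

def local_update(theta, X, y, w, lr=0.1, epochs=5):
    """One party's weighted logistic-regression update (assumed model)."""
    theta = theta.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ theta))        # predicted P(Y=1)
        theta -= lr * X.T @ (w * (p - y)) / len(y)  # weighted gradient step
    return theta

def fed_avg(parties, dim, rounds=10):
    """Aggregator loop: average party updates, weighted by data size.
    Only model parameters travel; training data stays local."""
    theta = np.zeros(dim)
    for _ in range(rounds):
        updates, sizes = [], []
        for X, y, a in parties:            # each party: features, labels, protected attribute
            w = reweighing_weights(a, y)   # bias mitigation before local training
            updates.append(local_update(theta, X, y, w))
            sizes.append(len(y))
        theta = np.average(updates, axis=0, weights=sizes)
    return theta
```

Because the weights are derived only from each party's local statistics, this variant adds no extra communication and preserves the no-data-sharing constraint the abstract emphasizes, which is why a reweighing-style pre-processing step composes cleanly with federated averaging.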