Paper Title
Enhancing Fairness of Visual Attribute Predictors
Paper Authors
Paper Abstract
The performance of deep neural networks for image recognition tasks such as predicting a smiling face is known to degrade with under-represented classes of sensitive attributes. We address this problem by introducing fairness-aware regularization losses based on batch estimates of Demographic Parity, Equalized Odds, and a novel Intersection-over-Union measure. The experiments performed on facial and medical images from CelebA, UTKFace, and the SIIM-ISIC melanoma classification challenge show the effectiveness of our proposed fairness losses for bias mitigation as they improve model fairness while maintaining high classification performance. To the best of our knowledge, our work is the first attempt to incorporate these types of losses in an end-to-end training scheme for mitigating biases of visual attribute predictors. Our code is available at https://github.com/nish03/FVAP.
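For intuition, the following is a minimal sketch (not the authors' implementation) of how a fairness-aware regularization loss based on a batch estimate of Demographic Parity could be added to a standard classification objective. The function names, the binary sensitive-group encoding, and the weighting coefficient lam are illustrative assumptions; the actual losses, including the Equalized Odds and Intersection-over-Union variants, are defined in the linked repository.

```python
import torch
import torch.nn.functional as F


def demographic_parity_penalty(logits, sensitive):
    """Batch estimate of the Demographic Parity gap.

    logits:    (N,) raw scores for the target attribute (e.g. "smiling").
    sensitive: (N,) binary sensitive-attribute labels (e.g. group 0 / group 1).
    Returns the absolute difference between the mean predicted positive-class
    probability of the two sensitive groups in the current batch.
    """
    probs = torch.sigmoid(logits)
    mask0 = sensitive == 0
    mask1 = sensitive == 1
    # Guard against batches that contain only one group (empty mean -> NaN).
    if mask0.any() and mask1.any():
        return (probs[mask0].mean() - probs[mask1].mean()).abs()
    return torch.zeros((), device=logits.device)


def training_step(model, images, targets, sensitive, lam=0.5):
    """Illustrative end-to-end objective: task loss plus fairness regularizer."""
    logits = model(images).squeeze(-1)
    cls_loss = F.binary_cross_entropy_with_logits(logits, targets.float())
    fair_loss = demographic_parity_penalty(logits, sensitive)
    # lam (hypothetical hyperparameter) trades classification accuracy
    # against the fairness of the predictor.
    return cls_loss + lam * fair_loss
```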