Paper Title
Improving Fairness in Image Classification via Sketching

Authors

Ruichen Yao, Ziteng Cui, Xiaoxiao Li, Lin Gu

Abstract

Fairness is a fundamental requirement for trustworthy and human-centered Artificial Intelligence (AI) systems. However, deep neural networks (DNNs) tend to make unfair predictions when the training data are collected from different sub-populations with different attributes (e.g., color, sex, age), leading to biased DNN predictions. We notice that this troubling phenomenon is often caused by the data itself, meaning that bias information is encoded into the DNN along with the useful information (e.g., class and semantic information). Therefore, we propose to use sketching to handle this phenomenon. Without losing the utility of the data, we explore image-to-sketching methods that maintain useful semantic information for the target classification while filtering out the useless bias information. In addition, we design a fair loss to further improve model fairness. We evaluate our method through extensive experiments on both a general-scene dataset and a medical-scene dataset. Our results show that the desired image-to-sketching method improves model fairness and achieves satisfactory results compared with state-of-the-art methods.
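The core idea above is to convert input images into sketches before classification, so that appearance cues tied to sensitive attributes (e.g., color) are discarded while the shape structure needed for the class label is kept. The paper's actual image-to-sketching methods are not specified in this abstract; as a hedged, minimal illustration of the preprocessing idea only, a crude edge-based "sketch" can be produced with a NumPy gradient filter (the `to_sketch` function, the gradient operator, and the threshold are all illustrative assumptions, not the paper's method):

```python
import numpy as np

def to_sketch(image: np.ndarray, threshold: float = 0.1) -> np.ndarray:
    """Crude edge-based 'sketch': grayscale -> gradient magnitude -> binarize.

    Illustrative stand-in only: the paper uses dedicated image-to-sketching
    methods, not this filter. The point is that color (one bias attribute
    named in the abstract) is dropped and only edge structure survives.
    """
    # Collapse RGB to grayscale, discarding the color channel entirely.
    gray = image.mean(axis=-1) if image.ndim == 3 else image
    # Finite-difference gradients as a simple edge detector.
    gy, gx = np.gradient(gray.astype(np.float64))
    magnitude = np.hypot(gx, gy)
    if magnitude.max() > 0:
        magnitude /= magnitude.max()
    # Binary sketch: edge pixels on, flat regions off.
    return (magnitude > threshold).astype(np.uint8)

# Synthetic example: a white square on a blue background.
img = np.zeros((32, 32, 3), dtype=np.float64)
img[..., 2] = 1.0            # blue background
img[8:24, 8:24] = (1, 1, 1)  # white square
sketch = to_sketch(img)
print(sketch.shape)   # (32, 32): same spatial size, single channel
print(sketch[16, 16]) # 0: flat interior of the square is erased
```

A downstream classifier would then be trained on `sketch` instead of `img`, so attribute information carried by color never reaches the network; the paper additionally applies a fairness-oriented loss during training, which this snippet does not model.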