Paper Title

Towards Fair Deep Anomaly Detection

Authors

Hongjing Zhang, Ian Davidson

Abstract

Anomaly detection aims to find instances that are considered unusual and is a fundamental problem of data science. Recently, deep anomaly detection methods have been shown to achieve superior results, particularly on complex data such as images. Our work focuses on deep one-class classification for anomaly detection, which learns a mapping from normal samples only. However, the non-linear transformation performed by deep learning can potentially find patterns associated with social bias. The challenge of adding fairness to deep anomaly detection is to ensure that anomaly predictions are both fair and correct. In this paper, we propose a new architecture for fair anomaly detection (Deep Fair SVDD) and train it using an adversarial network to de-correlate the sensitive attributes from the learned representations. This differs from how fairness is typically added, namely as a regularizer or a constraint. Further, we propose two effective fairness measures and empirically demonstrate that existing deep anomaly detection methods are unfair. We show that our proposed approach can largely remove this unfairness with minimal loss in anomaly detection performance. Lastly, we conduct an in-depth analysis to show the strengths and limitations of our proposed model, including parameter analysis, feature visualization, and run-time analysis.
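The abstract describes training an encoder with a Deep SVDD-style one-class objective while an adversarial network tries to recover the sensitive attribute from the learned representation, so that fooling the adversary de-correlates the two. The following is a minimal PyTorch sketch of that general idea, not the paper's actual implementation: the layer sizes, the center vector, the negative-BCE adversarial term, the weighting `lam`, and the names `Encoder`, `Discriminator`, and `train_step` are all illustrative assumptions.

```python
# Minimal sketch of a Deep-Fair-SVDD-style adversarial training step.
# Assumptions: binary sensitive attribute s, a fixed center c for the SVDD loss,
# and a fooling term implemented as a negated BCE loss (other choices, e.g.
# gradient reversal, are equally plausible).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps inputs to a latent representation used for one-class scoring."""
    def __init__(self, in_dim=784, rep_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, rep_dim),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Adversary that tries to predict the sensitive attribute from the representation."""
    def __init__(self, rep_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(rep_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )
    def forward(self, z):
        return self.net(z).squeeze(-1)

def train_step(encoder, disc, center, x, s, enc_opt, disc_opt, lam=1.0):
    """One adversarial training step on a batch of normal samples x with
    binary sensitive attribute s (both torch tensors)."""
    bce = nn.BCEWithLogitsLoss()

    # 1) Update the discriminator to predict s from the (detached) representation.
    z = encoder(x).detach()
    disc_loss = bce(disc(z), s.float())
    disc_opt.zero_grad(); disc_loss.backward(); disc_opt.step()

    # 2) Update the encoder: pull normal samples toward the center (SVDD objective)
    #    while fooling the discriminator, which de-correlates z from s.
    z = encoder(x)
    svdd_loss = ((z - center) ** 2).sum(dim=1).mean()
    adv_loss = -bce(disc(z), s.float())          # maximize the adversary's error
    enc_loss = svdd_loss + lam * adv_loss
    enc_opt.zero_grad(); enc_loss.backward(); enc_opt.step()
    return svdd_loss.item(), disc_loss.item()
```

At test time, the anomaly score under this kind of objective is the squared distance of the encoded sample to the center, and `lam` trades off detection accuracy against how strongly the representation is purged of sensitive-attribute information.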
