Paper Title

Do autoencoders need a bottleneck for anomaly detection?

Authors

Bang Xiang Yong, Alexandra Brintrup

Abstract

A common belief in designing deep autoencoders (AEs), a type of unsupervised neural network, is that a bottleneck is required to prevent learning the identity function. Learning the identity function renders the AEs useless for anomaly detection. In this work, we challenge this limiting belief and investigate the value of non-bottlenecked AEs. The bottleneck can be removed in two ways: (1) overparameterising the latent layer, and (2) introducing skip connections. However, only a limited number of works have reported on either approach. For the first time, we carry out extensive experiments covering various combinations of bottleneck removal schemes, types of AEs and datasets. In addition, we propose the infinitely-wide AEs as an extreme example of non-bottlenecked AEs. Their improvement over the baseline implies learning the identity function is not trivial as previously assumed. Moreover, we find that non-bottlenecked architectures (highest AUROC=0.857) can outperform their bottlenecked counterparts (highest AUROC=0.696) on the popular task of CIFAR (inliers) vs SVHN (anomalies), among other tasks, shedding light on the potential of developing non-bottlenecked AEs for improving anomaly detection.
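To make the two bottleneck-removal options concrete, here is a minimal numpy sketch (not the paper's implementation) of a one-hidden-layer AE forward pass with reconstruction error as the anomaly score. The function names, random untrained weights, and dimensions are illustrative assumptions: a bottlenecked AE uses a latent dimension smaller than the input, an overparameterised AE uses a latent dimension at least as large, and a skip connection routes the input around the latent layer entirely.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def autoencoder(x, latent_dim, skip=False, seed=0):
    """Forward pass of a toy one-hidden-layer AE with random (untrained) weights.

    latent_dim < x.size  -> classic bottlenecked AE
    latent_dim >= x.size -> overparameterised latent layer (way 1)
    skip=True            -> identity skip connection from input to output (way 2)
    """
    r = np.random.default_rng(seed)
    d = x.size
    W_enc = r.normal(0.0, 1.0 / np.sqrt(d), (latent_dim, d))
    W_dec = r.normal(0.0, 1.0 / np.sqrt(latent_dim), (d, latent_dim))
    z = relu(W_enc @ x)      # encoder: project input to latent code
    x_hat = W_dec @ z        # decoder: reconstruct input from code
    if skip:
        x_hat = x_hat + x    # skip connection bypasses the latent layer
    return x_hat

def anomaly_score(x, latent_dim, skip=False):
    # Reconstruction error is the standard AE anomaly score:
    # inputs the model reconstructs poorly are flagged as anomalies.
    x_hat = autoencoder(x, latent_dim, skip=skip)
    return float(np.mean((x - x_hat) ** 2))

x = np.random.default_rng(1).normal(size=8)
s_bottleneck = anomaly_score(x, latent_dim=2)             # bottlenecked
s_overparam  = anomaly_score(x, latent_dim=32)            # overparameterised
s_skip       = anomaly_score(x, latent_dim=2, skip=True)  # skip connection
```

Note that with a perfect skip connection and zero decoder weights the reconstruction would be exactly the identity function, which is why such architectures were assumed useless; the paper's finding is that, in practice, trained non-bottlenecked AEs do not collapse to this solution.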
