Paper Title

Distinction Maximization Loss: Efficiently Improving Out-of-Distribution Detection and Uncertainty Estimation by Replacing the Loss and Calibrating

Paper Authors

David Macêdo, Cleber Zanchettin, Teresa Ludermir

Paper Abstract

Building robust deterministic neural networks remains a challenge. On the one hand, some approaches improve out-of-distribution detection at the cost of reducing classification accuracy in some situations. On the other hand, some methods simultaneously increase classification accuracy, uncertainty estimation, and out-of-distribution detection at the expense of reducing the inference efficiency. In this paper, we propose training deterministic neural networks using our DisMax loss, which works as a drop-in replacement for the usual SoftMax loss (i.e., the combination of the linear output layer, the SoftMax activation, and the cross-entropy loss). Starting from the IsoMax+ loss, we create each logit based on the distances to all prototypes, rather than just the one associated with the correct class. We also introduce a mechanism to combine images to construct what we call fractional probability regularization. Moreover, we present a fast way to calibrate the network after training. Finally, we propose a composite score to perform out-of-distribution detection. Our experiments show that DisMax usually outperforms current approaches simultaneously in classification accuracy, uncertainty estimation, and out-of-distribution detection while maintaining deterministic neural network inference efficiency. The code to reproduce the results is available at https://github.com/dlmacedo/distinction-maximization-loss.
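The key architectural change described in the abstract is that the output layer produces logits from distances to learnable class prototypes, with each logit also depending on the distances to all prototypes rather than only the correct-class one. Below is a minimal, illustrative PyTorch sketch of that idea, not the authors' reference implementation (the official code is in the linked repository). The module name PrototypeDistanceLogits, the initialization choices, and the exact way the all-prototype term enters each logit are assumptions made here for illustration, building on the IsoMax+ ingredients the abstract mentions: normalized features, normalized prototypes, and a learnable distance scale.

```python
# Illustrative sketch only: a prototype-based output layer in which each logit
# depends on the distances to all class prototypes, as described in the abstract.
# Not the authors' reference implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PrototypeDistanceLogits(nn.Module):
    def __init__(self, num_features: int, num_classes: int):
        super().__init__()
        # One learnable prototype per class plus a learnable distance scale
        # (initialization here is an arbitrary illustrative choice).
        self.prototypes = nn.Parameter(torch.randn(num_classes, num_features) * 0.01)
        self.distance_scale = nn.Parameter(torch.tensor(1.0))

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # Euclidean distances between L2-normalized features and prototypes,
        # shape (batch, num_classes).
        distances = torch.cdist(
            F.normalize(features), F.normalize(self.prototypes), p=2.0
        )
        scaled = torch.abs(self.distance_scale) * distances
        # IsoMax+-style logits would be -scaled. Here each logit is additionally
        # shifted by the mean scaled distance to all prototypes, one plausible
        # reading of "each logit based on the distances to all prototypes".
        logits = -(scaled + scaled.mean(dim=1, keepdim=True))
        return logits


# Used as a drop-in replacement for the usual linear output layer, trained with
# standard cross-entropy:
# layer = PrototypeDistanceLogits(num_features=512, num_classes=10)
# loss = F.cross_entropy(layer(features), targets)
```

Under this particular reading, the per-sample mean term shifts every logit of a sample by the same amount, so the softmax probabilities used during cross-entropy training are unchanged; what changes are the raw logit values, which matters when an out-of-distribution score is later computed from the logits, as the abstract's composite score suggests.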
