Paper Title

Self-Distribution Distillation: Efficient Uncertainty Estimation

Authors

Yassir Fathullah, Mark J. F. Gales

Abstract

Deep learning is increasingly being applied in safety-critical domains. For these scenarios it is important to know the level of uncertainty in a model's prediction to ensure that appropriate decisions are made by the system. Deep ensembles are the de facto standard approach to obtaining various measures of uncertainty. However, ensembles often significantly increase the resources required in the training and/or deployment phases. Approaches have been developed that typically address the costs in only one of these phases. In this work we propose a novel training approach, self-distribution distillation (S2D), which is able to efficiently train a single model that can estimate uncertainties. Furthermore, it is possible to build ensembles of these models and apply hierarchical ensemble distillation approaches. Experiments on CIFAR-100 showed that S2D models outperformed standard models and Monte Carlo dropout. Additional out-of-distribution detection experiments on LSUN, Tiny ImageNet, and SVHN showed that even a standard deep ensemble can be outperformed by S2D-based ensembles and novel distilled models.
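The abstract does not spell out the training objective, but a minimal sketch of the general idea might look like the following, assuming (as an illustration, not as the paper's exact method) that the "teacher" predictions come from several dropout passes of the same network and that a deterministic "student" head parameterises a Dirichlet distribution over class probabilities. All names here (S2DModel, s2d_loss, uncertainty_measures) are hypothetical:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class S2DModel(nn.Module):
    """Single model with a stochastic 'teacher' branch and a Dirichlet 'student' head.

    A hypothetical sketch: the paper may differ in architecture and losses.
    """

    def __init__(self, backbone: nn.Module, feat_dim: int, n_classes: int, p_drop: float = 0.3):
        super().__init__()
        self.backbone = backbone                      # any feature extractor
        self.dropout = nn.Dropout(p_drop)             # source of stochastic teacher predictions
        self.teacher_head = nn.Linear(feat_dim, n_classes)
        self.student_head = nn.Linear(feat_dim, n_classes)

    def forward(self, x, n_teacher_samples: int = 5):
        feats = self.backbone(x)
        # Several stochastic passes through the teacher head of the *same* model.
        teacher_probs = torch.stack(
            [F.softmax(self.teacher_head(self.dropout(feats)), dim=-1)
             for _ in range(n_teacher_samples)],
            dim=0)                                    # shape (S, B, K)
        # Student head predicts Dirichlet concentrations, kept > 1 for stability.
        alphas = F.softplus(self.student_head(feats)) + 1.0  # shape (B, K)
        return teacher_probs, alphas


def s2d_loss(teacher_probs, alphas, labels, eps: float = 1e-8):
    """Cross-entropy on the Dirichlet mean plus Dirichlet NLL of the teacher samples."""
    mean_probs = alphas / alphas.sum(-1, keepdim=True)
    ce = F.nll_loss(torch.log(mean_probs + eps), labels)
    # Clamp then re-normalise so the teacher samples stay strictly on the simplex.
    tp = teacher_probs.clamp_min(eps)
    tp = tp / tp.sum(-1, keepdim=True)
    nll = -torch.distributions.Dirichlet(alphas).log_prob(tp).mean()
    return ce + nll


def uncertainty_measures(alphas):
    """Total, data, and knowledge uncertainty from the predicted Dirichlet."""
    alpha0 = alphas.sum(-1, keepdim=True)
    mean_probs = alphas / alpha0
    total = -(mean_probs * torch.log(mean_probs)).sum(-1)           # H[E[pi]]
    data = -(mean_probs * (torch.digamma(alphas + 1)
                           - torch.digamma(alpha0 + 1))).sum(-1)    # E[H[pi]]
    knowledge = total - data                                        # mutual information
    return total, data, knowledge
```

In this reading, only the deterministic student head is needed at test time, so a single forward pass yields both a prediction (the Dirichlet mean) and a decomposition of uncertainty, which is what would make the approach cheaper than a deep ensemble. The knowledge-uncertainty term (the mutual information) is the kind of signal typically used for out-of-distribution detection on benchmarks such as LSUN, Tiny ImageNet, and SVHN.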
