Title

Pitfalls of Epistemic Uncertainty Quantification through Loss Minimisation

Authors

Viktor Bengs, Eyke Hüllermeier, Willem Waegeman

Abstract

Uncertainty quantification has received increasing attention in machine learning in the recent past. In particular, a distinction between aleatoric and epistemic uncertainty has been found useful in this regard. The latter refers to the learner's (lack of) knowledge and appears to be especially difficult to measure and quantify. In this paper, we analyse a recent proposal based on the idea of a second-order learner, which yields predictions in the form of distributions over probability distributions. While standard (first-order) learners can be trained to predict accurate probabilities, namely by minimising suitable loss functions on sample data, we show that loss minimisation does not work for second-order predictors: The loss functions proposed for inducing such predictors do not incentivise the learner to represent its epistemic uncertainty in a faithful way.
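The abstract's failure mode can be made concrete with a small numerical sketch (an illustrative example, not the paper's code). Consider a second-order predictor that outputs a Dirichlet distribution over categorical probabilities, trained by minimising the expected log loss under that Dirichlet, which has the closed form E[-log θ_y] = ψ(Σᵢαᵢ) − ψ(α_y) with ψ the digamma function. Holding the Dirichlet mean fixed and increasing its concentration strictly decreases this loss, so the minimiser concentrates all mass on a single first-order distribution, i.e. reports zero epistemic uncertainty regardless of how little data the learner has seen:

```python
import math


def digamma(x):
    """Digamma function via the recurrence psi(x) = psi(x+1) - 1/x
    plus an asymptotic expansion for x >= 6 (standard library only)."""
    result = 0.0
    while x < 6.0:
        result -= 1.0 / x
        x += 1.0
    inv = 1.0 / x
    inv2 = inv * inv
    result += math.log(x) - 0.5 * inv - inv2 * (
        1 / 12 - inv2 * (1 / 120 - inv2 / 252)
    )
    return result


def expected_log_loss(alpha, y):
    """E_{theta ~ Dir(alpha)}[-log theta_y] = psi(sum(alpha)) - psi(alpha_y)."""
    return digamma(sum(alpha)) - digamma(alpha[y])


# Fixed Dirichlet mean (0.7, 0.3); scale s controls the concentration,
# i.e. how much epistemic uncertainty the second-order prediction expresses.
mean = (0.7, 0.3)
for s in (2, 20, 2000):
    alpha = [s * p for p in mean]
    print(f"s = {s:5d}: expected log loss = {expected_log_loss(alpha, 0):.4f}")

# The loss decreases monotonically in s and approaches -log(0.7), the
# first-order log loss of the mean prediction: the optimiser is always
# rewarded for driving epistemic uncertainty to zero.
```

The same mechanism underlies the paper's negative result: because the loss is minimised at a degenerate (maximally concentrated) second-order distribution, the reported spread over probability distributions carries no faithful information about the learner's lack of knowledge.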
