Paper Title
Fundamental Issues Regarding Uncertainties in Artificial Neural Networks
Paper Authors
Paper Abstract
Artificial Neural Networks (ANNs) implement a specific form of multivariate extrapolation and will generate an output for any input pattern, even when there is no similar training pattern. Extrapolations are not necessarily to be trusted, and in order to support safety-critical systems, we require such systems to give an indication of the training-sample-related uncertainty associated with their output. Some readers may think that this is a well-known issue that is already covered by the basic principles of pattern recognition. We explain below how this is not the case and how the conventional (likelihood estimate of the) conditional probability of classification does not correctly assess this uncertainty. We provide a discussion of the standard interpretations of this problem and show how a quantitative approach based upon long-standing methods can be applied in practice. The methods are illustrated on the task of early diagnosis of dementing diseases using Magnetic Resonance Imaging.
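The core claim can be demonstrated with a minimal sketch, which is not the paper's method: a standard probabilistic classifier (here scikit-learn's `LogisticRegression`, standing in for an ANN) reports near-certain class probabilities for an input far from any training pattern, while a simple kernel density estimate over the training inputs (scipy's `gaussian_kde`, standing in for the long-standing density-based methods the abstract alludes to) flags the same input as an extrapolation. All data, parameters, and library choices below are illustrative assumptions.

```python
# Minimal sketch: softmax-style class probabilities do not reflect
# distance from the training sample, but a density estimate over the
# training inputs does. Not the paper's implementation.
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two well-separated training classes in 2D (illustrative data).
X0 = rng.normal(loc=[-2.0, 0.0], scale=0.5, size=(200, 2))
X1 = rng.normal(loc=[+2.0, 0.0], scale=0.5, size=(200, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)

clf = LogisticRegression().fit(X, y)

# A query far outside the region covered by any training pattern.
x_far = np.array([[40.0, 40.0]])

# The conventional conditional probability of classification: the model
# still reports near-certainty, even though this is pure extrapolation.
print("P(class | x_far):", clf.predict_proba(x_far))

# Density of the training inputs at the query: a quantitative signal of
# how unlike the training sample the input is (near zero here).
kde = gaussian_kde(X.T)  # gaussian_kde expects shape (n_dims, n_samples)
print("training-input density at x_far:", kde(x_far.T))
```

Running this prints a class probability close to 1.0 alongside a training-input density that is essentially zero, which is precisely the mismatch the abstract identifies: the conditional probability of classification says nothing about whether the input resembles any training pattern.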