Title
An Empirical Study of Incremental Learning in Neural Network with Noisy Training Set
Authors
Abstract
The notion of incremental learning is to train an ANN algorithm in stages, as and when newer training data arrives. Incremental learning has become widespread in recent times with the advent of deep learning. Noise in the training data reduces the accuracy of the algorithm. In this paper, we present an empirical study of the effect of noise in the training phase. We show numerically that the accuracy of the algorithm depends more on the location of the error than on the percentage of error. Using a Perceptron, a Feed-Forward Neural Network, and a Radial Basis Function Neural Network, we show that for the same percentage of error, the accuracy of the algorithm varies significantly with the location of the error. Furthermore, our results show that the dependence of accuracy on the location of error is independent of the algorithm. However, the slope of the degradation curve decreases with more sophisticated algorithms.
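The experimental setup described above can be illustrated with a minimal sketch. This is not the authors' code: it assumes "location of error" means which incremental training stage receives the corrupted labels, and it uses a hand-rolled Perceptron on synthetic 2-D data. The total error percentage is held fixed while the corrupted stage varies.

```python
# Illustrative sketch (assumed setup, not the paper's code): train a
# Perceptron incrementally over several stages, flipping a fixed fraction
# of labels in exactly ONE stage. Varying which stage is noisy changes
# the "location of error" while the error percentage stays constant.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    # Linearly separable 2-D data with labels in {-1, +1}.
    X = rng.normal(size=(n, 2))
    y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
    return X, y

def train_incremental(batches, epochs=5, lr=0.1):
    # Classic perceptron update rule, applied stage by stage:
    # update weights only on misclassified samples.
    w = np.zeros(2)
    b = 0.0
    for X, y in batches:
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                if yi * (xi @ w + b) <= 0:
                    w += lr * yi * xi
                    b += lr * yi
    return w, b

def accuracy(w, b, X, y):
    return float(np.mean(np.sign(X @ w + b) == y))

# Five training stages; corrupt 40% of the labels in one chosen stage,
# so the overall error percentage is identical in every run.
X_test, y_test = make_data(1000)
for noisy_stage in range(5):
    batches = []
    for s in range(5):
        X, y = make_data(200)
        if s == noisy_stage:
            flip = rng.random(len(y)) < 0.4
            y = np.where(flip, -y, y)
        batches.append((X, y))
    w, b = train_incremental(batches)
    print(f"noise in stage {noisy_stage}: "
          f"test accuracy = {accuracy(w, b, X_test, y_test):.3f}")
```

Comparing the printed accuracies across runs gives a degradation curve over error location; repeating the loop with a more sophisticated model (e.g. a feed-forward or RBF network) would, per the abstract, flatten the slope of that curve.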