Paper Title


Can a Hebbian-like learning rule be avoiding the curse of dimensionality in sparse distributed data?

Paper Authors

Maria Osório, Luís Sa-Couto, Andreas Wichert

Paper Abstract

It is generally assumed that the brain uses something akin to sparse distributed representations. These representations, however, are high-dimensional, and consequently they affect the classification performance of traditional Machine Learning models due to "the curse of dimensionality". In tasks for which there is a vast amount of labeled data, Deep Networks seem to solve this issue with many layers and a non-Hebbian backpropagation algorithm. The brain, however, seems able to solve the problem with few layers. In this work, we hypothesize that this happens by using Hebbian learning. Indeed, the Hebbian-like learning rule of Restricted Boltzmann Machines learns the input patterns asymmetrically: it exclusively learns the correlations between non-zero values and ignores the zeros, which represent the vast majority of the input dimensions. By ignoring the zeros, "the curse of dimensionality" problem can be avoided. To test our hypothesis, we generated several sparse datasets and compared the performance of a Restricted Boltzmann Machine classifier with some backprop-trained networks. The experiments using these codes confirm our initial intuition, as the Restricted Boltzmann Machine shows good generalization performance, while the Neural Networks trained with the backpropagation algorithm overfit the training data.
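The asymmetry described in the abstract can be seen directly in the contrastive-divergence weight update of a Restricted Boltzmann Machine. What follows is a minimal sketch, assuming a binary RBM trained with one step of contrastive divergence (CD-1); biases are omitted and the toy sparse dataset is illustrative, not one of the paper's generated datasets. The point it demonstrates: whenever a visible unit v_i is 0, the Hebbian-like positive-phase product v_i * h_j is 0, so inactive input dimensions contribute nothing to the data-driven part of the update.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy sparse binary data: 100 patterns, 256 dimensions, roughly 5% active units.
V = (rng.random((100, 256)) < 0.05).astype(float)

n_visible, n_hidden, lr = 256, 32, 0.1
W = 0.01 * rng.standard_normal((n_visible, n_hidden))

for v0 in V:                                          # one CD-1 step per pattern
    h0 = sigmoid(v0 @ W)                              # hidden probabilities (positive phase)
    h0_s = (rng.random(n_hidden) < h0).astype(float)  # sampled hidden states
    v1 = sigmoid(W @ h0_s)                            # reconstructed visible probabilities
    h1 = sigmoid(v1 @ W)                              # hidden probabilities (negative phase)
    # Hebbian-like update: <v h>_data minus <v h>_reconstruction.
    # Every row i of np.outer(v0, h0) with v0[i] == 0 is all zeros, so the
    # positive (data-driven) phase never adjusts weights attached to inactive
    # inputs: the zero dimensions are effectively ignored by the Hebbian term.
    W += lr * (np.outer(v0, h0) - np.outer(v1, h1))

Biases and weight decay are left out deliberately so that the Hebbian structure of the update, a product of pre- and post-synaptic activities in each phase, stays visible.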
