Title
Learning in Feedback-driven Recurrent Spiking Neural Networks using full-FORCE Training
Authors
Abstract
Feedback-driven recurrent spiking neural networks (RSNNs) are powerful computational models that can mimic dynamical systems. However, the presence of a feedback loop from the readout to the recurrent layer destabilizes the learning mechanism and prevents it from converging. Here, we propose a supervised training procedure for RSNNs, in which a second network is introduced only during training to provide hints about the target dynamics. The proposed training procedure consists of generating targets for both the recurrent and readout layers (i.e., for the full RSNN system). It uses the recursive least squares-based First-Order and Reduced Control Error (FORCE) algorithm to fit the activity of each layer to its target. The proposed full-FORCE training procedure reduces the number of modifications needed to keep the error between the output and the target close to zero. These modifications control the feedback loop, which causes the training to converge. We demonstrate the improved performance and noise robustness of the proposed full-FORCE training procedure on 8 dynamical systems, using RSNNs with leaky integrate-and-fire (LIF) neurons and rate coding. For energy-efficient hardware implementation, an alternative time-to-first-spike (TTFS) coding is implemented for the full-FORCE training procedure. Compared to rate coding, full-FORCE with TTFS coding generates fewer spikes and facilitates faster convergence to the target dynamics.
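The recursive least squares (RLS) step at the heart of FORCE training can be sketched as follows. This is a minimal illustration of the classic FORCE readout update on a rate network (in the style of Sussillo & Abbott), not the paper's full-FORCE two-network procedure or its spiking LIF implementation; the network size, gain, time constants, and sine-wave target are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of FORCE training with an RLS update (assumed parameters,
# rate units rather than the paper's spiking LIF neurons).
rng = np.random.default_rng(0)
N = 200                # number of recurrent units (illustrative)
dt, tau = 1e-3, 1e-2   # integration step and neuron time constant
g = 1.5                # recurrent gain (chaotic regime before training)

J = g * rng.standard_normal((N, N)) / np.sqrt(N)  # recurrent weights (fixed)
w_fb = 2.0 * rng.uniform(-1.0, 1.0, N)            # readout-to-recurrent feedback
w = np.zeros(N)                                   # readout weights (trained)
P = np.eye(N)                                     # running inverse correlation estimate

x = 0.5 * rng.standard_normal(N)  # membrane-like state
errs = []
for t in range(5000):
    f = np.sin(2 * np.pi * 2 * t * dt)  # target dynamics (illustrative)
    r = np.tanh(x)                      # firing rates
    z = w @ r                           # readout output
    # Rate dynamics: the readout z is fed back into the recurrent layer,
    # which is the feedback loop the training must control.
    x += dt / tau * (-x + J @ r + w_fb * z)
    # RLS / FORCE update: modify w so the error z - f stays near zero.
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)
    P -= np.outer(k, Pr)
    w -= (z - f) * k
    errs.append(abs(z - f))
# After training, the online error should be much smaller than at the start.
```

The key property shown here is that each RLS step makes only the small weight change needed to cancel the current error; full-FORCE extends this idea by supplying targets for the recurrent layer as well, so both layers of the RSNN are fitted with the same update rule.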