Paper Title

A new inference approach for training shallow and deep generalized linear models of noisy interacting neurons

Paper Authors

Gabriel Mahuas, Giulio Isacchini, Olivier Marre, Ulisse Ferrari, Thierry Mora

Paper Abstract

Generalized linear models are one of the most efficient paradigms for predicting the correlated stochastic activity of neuronal networks in response to external stimuli, with applications in many brain areas. However, when dealing with complex stimuli, the inferred coupling parameters often do not generalize across different stimulus statistics, leading to degraded performance and blowup instabilities. Here, we develop a two-step inference strategy that allows us to train robust generalized linear models of interacting neurons, by explicitly separating the effects of correlations in the stimulus from network interactions in each training step. Applying this approach to the responses of retinal ganglion cells to complex visual stimuli, we show that, compared to classical methods, the models trained in this way exhibit improved performance, are more stable, yield robust interaction networks, and generalize well across complex visual statistics. The method can be extended to deep convolutional neural networks, leading to models with high predictive accuracy for both the neuron firing rates and their correlations.
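To make the two-step idea concrete, here is a minimal sketch in Python (NumPy/SciPy) of one way such a staged fit could be organized for a Poisson GLM with pairwise couplings: the stimulus filter is fit first without any coupling terms, and the couplings are then fit with the stimulus drive frozen. The toy data, the exponential nonlinearity, the single-bin coupling history, and the L-BFGS-B optimizer are illustrative assumptions, not the authors' exact procedure.

```python
# Sketch of a two-step fit for a Poisson GLM with neuron-neuron couplings.
# All data below are synthetic placeholders; the model details are assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy data: T time bins, D stimulus features, N neurons with binned spike counts.
T, D, N = 2000, 20, 5
stimulus = rng.normal(size=(T, D))       # design matrix of stimulus features
spikes = rng.poisson(1.0, size=(T, N))   # binned spike counts (placeholder data)

def nll_stimulus_only(theta, X, y):
    """Poisson negative log-likelihood for a stimulus-only GLM of one neuron."""
    rate = np.exp(X @ theta[:-1] + theta[-1])        # exponential nonlinearity
    return np.sum(rate - y * np.log(rate + 1e-12))

def nll_couplings_only(w, drive, Ypast, y):
    """Poisson NLL with the stimulus drive held fixed; only couplings are free."""
    rate = np.exp(drive + Ypast @ w)
    return np.sum(rate - y * np.log(rate + 1e-12))

stim_filters, couplings = [], []
for n in range(N):
    y = spikes[:, n]

    # Step 1: fit the stimulus filter (plus bias) with no coupling terms,
    # so that stimulus-driven structure is absorbed here.
    res1 = minimize(nll_stimulus_only, np.zeros(D + 1),
                    args=(stimulus, y), method="L-BFGS-B")
    stim_filters.append(res1.x)

    # Step 2: freeze the stimulus drive and fit only the couplings to the
    # population activity in the previous time bin (self-history included).
    drive = stimulus @ res1.x[:-1] + res1.x[-1]
    Ypast = np.roll(spikes, 1, axis=0)
    Ypast[0] = 0
    res2 = minimize(nll_couplings_only, np.zeros(N),
                    args=(drive, Ypast, y), method="L-BFGS-B")
    couplings.append(res2.x)

# couplings[i, j]: inferred influence of neuron j's previous-bin activity on neuron i.
couplings = np.array(couplings)
print("coupling matrix shape:", couplings.shape)
```

In this sketch, freezing the stimulus drive in the second step is what keeps stimulus effects out of the coupling fit, mirroring the separation between stimulus correlations and network interactions described in the abstract.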
