Paper Title

Deep Reinforcement Learning for Neural Control

Authors

Jimin Kim, Eli Shlizerman

Abstract

We present a novel methodology for control of neural circuits based on deep reinforcement learning. Our approach achieves aimed behavior by generating external continuous stimulation of existing neural circuits (neuromodulation control) or modulations of neural circuit architecture (connectome control). Both forms of control are challenging due to the nonlinear and recurrent complexity of neural activity. To infer candidate control policies, our approach maps neural circuits and their connectome into a grid-world-like setting and infers the actions needed to achieve aimed behavior. The actions are inferred by adapting deep Q-learning methods, known for their robust performance in navigating grid-worlds. We apply our approach to a model of \textit{C. elegans} that simulates the full somatic nervous system with muscles and body. Our framework successfully infers neuropeptidic currents and synaptic architectures for control of chemotaxis. Our findings are consistent with in vivo measurements and provide additional insights into neural control of chemotaxis. We further demonstrate the generality and scalability of our method by inferring chemotactic neural circuits from scratch.
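The control strategy named in the abstract, adapting deep Q-learning to a grid-world-like setting, can be illustrated with a generic sketch. The code below is not the paper's implementation: the toy `GridWorld` environment, the network width, and all hyperparameters are assumptions chosen only to show the standard DQN loop (epsilon-greedy exploration, replay buffer, temporal-difference targets) that the abstract refers to.

```python
# Minimal deep Q-learning sketch in a toy grid-world.
# NOT the authors' code: environment, network, and hyperparameters
# are illustrative assumptions only.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim


class GridWorld:
    """Toy N x N grid; the agent seeks a goal cell (a hypothetical
    stand-in for the chemotaxis target in the paper's setting)."""

    def __init__(self, n=8):
        self.n = n
        self.reset()

    def reset(self):
        self.pos = [0, 0]
        return self._state()

    def _state(self):
        # Normalized (row, col) coordinates as the observation.
        return torch.tensor(self.pos, dtype=torch.float32) / (self.n - 1)

    def step(self, action):  # 0..3 -> up, down, left, right
        dr, dc = [(-1, 0), (1, 0), (0, -1), (0, 1)][action]
        self.pos[0] = min(max(self.pos[0] + dr, 0), self.n - 1)
        self.pos[1] = min(max(self.pos[1] + dc, 0), self.n - 1)
        done = self.pos == [self.n - 1, self.n - 1]
        reward = 1.0 if done else -0.01  # step cost favors shorter paths
        return self._state(), reward, done


# Small Q-network mapping state -> action values (width is an assumption).
q_net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 4))
opt = optim.Adam(q_net.parameters(), lr=1e-3)
buffer, gamma, eps = deque(maxlen=10_000), 0.99, 0.1

env = GridWorld()
for episode in range(200):
    s = env.reset()
    for t in range(400):  # cap episode length
        # Epsilon-greedy action selection.
        if random.random() < eps:
            a = random.randrange(4)
        else:
            a = q_net(s).argmax().item()
        s2, r, done = env.step(a)
        buffer.append((s, a, r, s2, done))
        s = s2
        if len(buffer) >= 64:
            # Sample a minibatch and take one TD-learning step.
            ss, aa, rr, ss2, dd = zip(*random.sample(buffer, 64))
            ss, ss2 = torch.stack(ss), torch.stack(ss2)
            aa = torch.tensor(aa)
            rr = torch.tensor(rr)
            dd = torch.tensor(dd, dtype=torch.float32)
            q = q_net(ss).gather(1, aa.unsqueeze(1)).squeeze(1)
            with torch.no_grad():
                target = rr + gamma * (1 - dd) * q_net(ss2).max(1).values
            loss = nn.functional.mse_loss(q, target)
            opt.zero_grad()
            loss.backward()
            opt.step()
        if done:
            break
```

In the paper's setting, the state would instead come from the simulated nervous system, and the actions would correspond to candidate external stimuli (neuromodulation control) or connectome modifications (connectome control) rather than grid moves.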
