Paper Title

On 1/n neural representation and robustness

Paper Authors

Nassar, Josue, Sokol, Piotr Aleksander, Chung, SueYeon, Harris, Kenneth D., Park, Il Memming

Abstract

Understanding the nature of representation in neural networks is a goal shared by neuroscience and machine learning. It is therefore exciting that both fields converge not only on shared questions but also on similar approaches. A pressing question in these areas is understanding how the structure of the representation used by neural networks affects both their generalization and their robustness to perturbations. In this work, we investigate the latter by juxtaposing experimental results regarding the covariance spectrum of neural representations in the mouse V1 (Stringer et al.) with artificial neural networks. We use adversarial robustness to probe Stringer et al.'s theory regarding the causal role of a 1/n covariance spectrum. We empirically investigate the benefits such a neural code confers in neural networks, and illuminate its role in multi-layer architectures. Our results show that imposing the experimentally observed structure on artificial neural networks makes them more robust to adversarial attacks. Moreover, our findings complement the existing theory relating wide neural networks to kernel methods, by showing the role of intermediate representations.
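To make the central quantity concrete: the "1/n covariance spectrum" refers to the eigenvalues of the covariance of neural responses decaying like the n-th eigenvalue ∝ n^(-α) with α ≈ 1. Below is a minimal sketch (not the authors' code; the synthetic data and fit range are assumptions for illustration) of how one might measure this decay exponent from a representation matrix:

```python
# Sketch: estimate the power-law decay exponent alpha of a covariance
# spectrum, where eigenvalue_n ~ n^(-alpha). A "1/n" spectrum means
# alpha is close to 1. We synthesize responses with a prescribed 1/n
# spectrum purely to illustrate the measurement procedure.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_stimuli = 500, 2000

# Build synthetic responses whose covariance eigenvalues follow 1/n.
target = 1.0 / np.arange(1, n_neurons + 1)            # desired spectrum
basis = np.linalg.qr(rng.standard_normal((n_neurons, n_neurons)))[0]
latents = rng.standard_normal((n_neurons, n_stimuli))
responses = basis @ (np.sqrt(target)[:, None] * latents)

# Empirical covariance spectrum, sorted in descending order.
cov = np.cov(responses)                                # neurons x neurons
eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]

# Fit the decay exponent alpha on a log-log scale over a mid-range of
# ranks (a heuristic choice to avoid edge effects at both ends).
idx = np.arange(10, 200)
alpha = -np.polyfit(np.log(idx + 1), np.log(eigvals[idx]), 1)[0]
print(f"estimated decay exponent alpha = {alpha:.2f}")
```

For responses drawn with a true 1/n spectrum, the fitted exponent should come out near 1; deviations reflect finite-sample noise in the empirical covariance.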
