Paper Title

Measuring Representational Robustness of Neural Networks Through Shared Invariances

Authors

Vedant Nanda, Till Speicher, Camila Kolling, John P. Dickerson, Krishna P. Gummadi, Adrian Weller

Abstract

A major challenge in studying robustness in deep learning is defining the set of "meaningless" perturbations to which a given Neural Network (NN) should be invariant. Most work on robustness implicitly uses a human as the reference model to define such perturbations. Our work offers a new view on robustness by using another reference NN to define the set of perturbations a given NN should be invariant to, thus generalizing the reliance on a reference "human NN" to any NN. This makes measuring robustness equivalent to measuring the extent to which two NNs share invariances, for which we propose a measure called STIR. STIR re-purposes existing representation similarity measures to make them suitable for measuring shared invariances. Using our measure, we are able to gain insights into how shared invariances vary with changes in weight initialization, architecture, loss functions, and training dataset. Our implementation is available at: https://github.com/nvedant07/STIR.
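
The abstract describes STIR only at a high level: it re-purposes a representation similarity measure to quantify how invariant one NN is to perturbations that a reference NN is invariant to. Below is a minimal PyTorch sketch of one plausible instantiation, not the repository's actual API: inputs that the reference model m2 is (approximately) invariant to are synthesized by representation inversion, and a linear-CKA similarity between m1's representations on the original and inverted inputs is reported. The names linear_cka, invert_representations, and stir, and the assumption that each model is a callable returning penultimate-layer representations, are illustrative.

import torch

def linear_cka(A, B):
    # Centered Kernel Alignment with a linear kernel between two
    # representation matrices of shape (n_samples, n_features).
    A = A - A.mean(dim=0, keepdim=True)
    B = B - B.mean(dim=0, keepdim=True)
    hsic = (A.T @ B).norm() ** 2
    return (hsic / ((A.T @ A).norm() * (B.T @ B).norm())).item()

def invert_representations(model, x_target, x_init, steps=500, lr=0.01):
    # Starting from x_init, find inputs x' whose representations under
    # `model` match those of x_target, i.e. perturbations to which
    # `model` is (approximately) invariant.
    with torch.no_grad():
        target_rep = model(x_target)
    x = x_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((model(x) - target_rep) ** 2).mean()
        loss.backward()
        opt.step()
    return x.detach()

def stir(m1, m2, x, x_init):
    # STIR(m1 | m2): how invariant m1's representations are to
    # perturbations that m2 is invariant to.
    x_inv = invert_representations(m2, x, x_init)
    return linear_cka(m1(x).detach(), m1(x_inv).detach())

Note that stir(m1, m2, x, x_init) as sketched is directional: swapping the two models asks the converse question, so shared invariance between a pair of NNs is naturally reported as a pair of scores.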
