Title
Selectivity considered harmful: evaluating the causal impact of class selectivity in DNNs
Authors
Abstract
The properties of individual neurons are often analyzed in order to understand the biological and artificial neural networks in which they're embedded. Class selectivity, typically defined as how different a neuron's responses are across different classes of stimuli or data samples, is commonly used for this purpose. However, it remains an open question whether it is necessary and/or sufficient for deep neural networks (DNNs) to learn class selectivity in individual units. We investigated the causal impact of class selectivity on network function by directly regularizing for or against class selectivity. Using this regularizer to reduce class selectivity across units in convolutional neural networks increased test accuracy by over 2% for ResNet18 trained on Tiny ImageNet. For ResNet20 trained on CIFAR10 we could reduce class selectivity by a factor of 2.5 with no impact on test accuracy, and reduce it nearly to zero with only a small ($\sim$2%) drop in test accuracy. In contrast, regularizing to increase class selectivity significantly decreased test accuracy across all models and datasets. These results indicate that class selectivity in individual units is neither sufficient nor strictly necessary, and can even impair DNN performance. They also encourage caution when focusing on the properties of single units as representative of the mechanisms by which DNNs function.
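The abstract does not spell out the selectivity measure, but a class selectivity index is commonly computed from each unit's class-conditional mean activations: the preferred class's mean response versus the mean response to all other classes. The sketch below is a minimal, hedged illustration of that idea in NumPy (the function name, `eps` smoothing constant, and exact formula are assumptions for illustration, not the paper's verbatim definition); a regularizer like the one described would add this quantity to the task loss, with the sign of the coefficient pushing selectivity up or down.

```python
import numpy as np

def class_selectivity(mean_activations, eps=1e-7):
    """Per-unit class selectivity index (illustrative definition).

    mean_activations: array of shape (num_units, num_classes) holding each
    unit's average activation for each class.
    Returns values in [0, 1]: 0 means the unit responds identically to every
    class; 1 means it responds to a single class only.
    """
    num_classes = mean_activations.shape[1]
    mu_max = mean_activations.max(axis=1)
    # Mean activation over all classes except the preferred one.
    mu_others = (mean_activations.sum(axis=1) - mu_max) / (num_classes - 1)
    return (mu_max - mu_others) / (mu_max + mu_others + eps)

# A unit that fires only for class 0 vs. a unit that fires equally for all.
acts = np.array([[1.0, 0.0, 0.0, 0.0],
                 [0.5, 0.5, 0.5, 0.5]])
si = class_selectivity(acts)

# A regularized objective would then look like (alpha > 0 discourages
# selectivity, alpha < 0 encourages it):
#   total_loss = task_loss + alpha * si.mean()
```

Here `si[0]` is close to 1 (maximally selective) and `si[1]` is 0 (unselective), matching the intuition that reducing the mean index across units drives the network toward distributed, non-selective representations.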