Paper Title
What Does CNN Shift Invariance Look Like? A Visualization Study
Paper Authors
Paper Abstract
Feature extraction with convolutional neural networks (CNNs) is a popular method to represent images for machine learning tasks. These representations seek to capture global image content, and ideally should be independent of geometric transformations. We focus on measuring and visualizing the shift invariance of extracted features from popular off-the-shelf CNN models. We present the results of three experiments comparing representations of millions of images with exhaustively shifted objects, examining both local invariance (within a few pixels) and global invariance (across the image frame). We conclude that features extracted from popular networks are not globally invariant, and that biases and artifacts exist within this variance. Additionally, we determine that anti-aliased models significantly improve local invariance but do not impact global invariance. Finally, we provide a code repository for experiment reproduction, as well as a website to interact with our results at https://jakehlee.github.io/visualize-invariance.
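As a rough illustration of the kind of measurement described in the abstract, the sketch below extracts features from a pretrained, off-the-shelf CNN for an object pasted at exhaustively shifted positions in the frame, then compares the resulting representations. The specific backbone (ResNet-18), gray canvas, step size, and cosine-similarity metric are assumptions for illustration only, not the paper's exact protocol; `object.png` is a hypothetical input file.

```python
# Minimal sketch (assumed setup, not the authors' code): measure how a CNN's
# extracted features change as an object is shifted across the image frame.
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained backbone with the classification head removed, so the output
# is a global feature vector for the whole image.
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def paste_object(obj: Image.Image, x: int, y: int, size: int = 224) -> Image.Image:
    """Place a small object image at position (x, y) on a blank canvas."""
    canvas = Image.new("RGB", (size, size), (128, 128, 128))
    canvas.paste(obj, (x, y))
    return canvas

@torch.no_grad()
def feature(img: Image.Image) -> torch.Tensor:
    """Extract the backbone's feature vector for a single image."""
    return backbone(preprocess(img).unsqueeze(0)).squeeze(0)

# Compare a reference position against shifts across the frame; a perfectly
# shift-invariant representation would give a similarity of 1.0 everywhere.
obj = Image.open("object.png").convert("RGB").resize((64, 64))  # hypothetical input
ref = feature(paste_object(obj, 80, 80))
for x in range(0, 161, 32):
    for y in range(0, 161, 32):
        sim = F.cosine_similarity(ref, feature(paste_object(obj, x, y)), dim=0)
        print(f"shift ({x:3d}, {y:3d}): cosine similarity = {sim.item():.4f}")
```

Sweeping the object over a fine grid of positions and plotting the similarities as a heatmap gives the kind of global-invariance visualization the abstract describes; restricting the sweep to a few pixels around the reference probes local invariance instead.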