Paper Title

Intriguing Properties of Contrastive Losses

Paper Authors

Ting Chen, Calvin Luo, Lala Li

Paper Abstract

We study three intriguing properties of contrastive learning. First, we generalize the standard contrastive loss to a broader family of losses, and we find that various instantiations of the generalized loss perform similarly in the presence of a multi-layer non-linear projection head. Second, we study whether instance-based contrastive learning (with a global image representation) can learn well on images with multiple objects present. We find that meaningful hierarchical local features can be learned despite the fact that these objectives operate on global instance-level features. Finally, we study the phenomenon of feature suppression among competing features shared across augmented views, such as "color distribution" vs "object class". We construct datasets with explicit and controllable competing features, and show that, for contrastive learning, a few bits of easy-to-learn shared features can suppress, and even fully prevent, the learning of other sets of competing features. In scenarios where there are multiple objects in an image, the dominant object can suppress the learning of smaller objects. Existing contrastive learning methods critically rely on data augmentation to favor certain sets of features over others, and could suffer from learning saturation in scenarios where existing augmentations cannot fully address the feature suppression. This poses open challenges to existing contrastive learning techniques.
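For context on the first property: the standard contrastive loss that the paper generalizes is the SimCLR-style NT-Xent loss, computed on the outputs of a multi-layer non-linear projection head. Below is a minimal PyTorch sketch of that baseline setup; it is illustrative rather than the authors' implementation, and the names and defaults (`ProjectionHead`, `hidden_dim`, `temperature`) are our own choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProjectionHead(nn.Module):
    """Multi-layer non-linear projection head (two layers in this sketch)."""
    def __init__(self, in_dim: int, hidden_dim: int = 2048, out_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return self.net(h)

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor,
                 temperature: float = 0.1) -> torch.Tensor:
    """Standard NT-Xent contrastive loss over two batches of projected views.

    z1, z2: (N, D) projection-head outputs for two augmented views of the
    same N images; row i of z1 and row i of z2 form a positive pair.
    """
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D), unit norm
    sim = z @ z.t() / temperature                       # scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                   # exclude self-similarity
    # The positive for row i is its other view: i+N in the first half, i-N in the second.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)
```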
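The feature-suppression study relies on datasets with explicit, controllable competing features. The following is a hedged sketch of one plausible construction in the spirit of the abstract's "a few bits of easy-to-learn shared features": extra channels encoding a random integer, identical across both augmented views, are appended to each image. The function name and details are illustrative, not the paper's dataset code.

```python
import torch

def add_shared_random_bits(view1: torch.Tensor, view2: torch.Tensor,
                           num_bits: int) -> tuple:
    """Append `num_bits` extra channels encoding one random integer,
    identical across both augmented views of the same image.

    view1, view2: (C, H, W) tensors. Because the appended bits are shared
    within the pair but random across images, a contrastive model can match
    positives from these bits alone, competing with the image content.
    """
    _, h, w = view1.shape
    value = torch.randint(0, 2 ** num_bits, (1,)).item()  # the competing feature
    bits = [(value >> i) & 1 for i in range(num_bits)]     # binary encoding
    extra = torch.tensor(bits, dtype=view1.dtype).view(num_bits, 1, 1).expand(num_bits, h, w)
    return torch.cat([view1, extra], dim=0), torch.cat([view2, extra], dim=0)
```

Raising `num_bits` gives the shared shortcut more easily matched information, which is the kind of knob the abstract describes for controlling how strongly an easy feature suppresses the learning of competing features such as object class.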
