Paper Title

Few-Shot Classification with Contrastive Learning

Paper Authors

Zhanyuan Yang, Jinghua Wang, Yingying Zhu

Paper Abstract

A two-stage training paradigm consisting of sequential pre-training and meta-training stages has been widely used in current few-shot learning (FSL) research. Many of these methods use self-supervised learning and contrastive learning to achieve new state-of-the-art results. However, the potential of contrastive learning in both stages of the FSL training paradigm is still not fully exploited. In this paper, we propose a novel contrastive learning-based framework that seamlessly integrates contrastive learning into both stages to improve the performance of few-shot classification. In the pre-training stage, we propose a self-supervised contrastive loss in the form of feature vector vs. feature map and feature map vs. feature map, which uses global and local information to learn good initial representations. In the meta-training stage, we propose a cross-view episodic training mechanism to perform nearest centroid classification on two different views of the same episode, and adopt a distance-scaled contrastive loss based on them. These two strategies force the model to overcome the bias between views and promote the transferability of representations. Extensive experiments on three benchmark datasets demonstrate that our method achieves competitive results.
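The abstract refers to two standard building blocks: nearest centroid classification over an episode's support set, and a contrastive (InfoNCE-style) loss. The sketch below illustrates both in plain NumPy; it is not the paper's implementation (the paper's distance-scaled, cross-view variants are not reproduced here), and all function names and the temperature parameter `tau` are illustrative assumptions.

```python
import numpy as np

def nearest_centroid_classify(support, support_labels, query, n_way):
    """Nearest centroid (prototype-based) few-shot classification.

    support: (n_support, d) support-set embeddings
    support_labels: (n_support,) integer class labels in [0, n_way)
    query: (n_query, d) query-set embeddings
    Returns the predicted class index for each query embedding.
    """
    # Class centroid = mean embedding of that class's support examples.
    prototypes = np.stack([
        support[support_labels == c].mean(axis=0) for c in range(n_way)
    ])  # (n_way, d)
    # Squared Euclidean distance from every query to every prototype.
    dists = ((query[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)

def info_nce(anchor, positive, negatives, tau=0.1):
    """Generic InfoNCE-style contrastive loss for a single anchor.

    Pulls the anchor toward its positive and pushes it away from the
    negatives, using cosine similarity scaled by temperature tau.
    """
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / tau
    logits -= logits.max()  # numerical stability before exponentiation
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())
```

In a cross-view episodic setup of the kind the abstract describes, prototypes would be computed on one augmented view of the episode and queries classified from the other view, with the contrastive term applied across views.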
