Paper Title


GCT: Graph Co-Training for Semi-Supervised Few-Shot Learning

Paper Authors

Rui Xu, Lei Xing, Shuai Shao, Lifei Zhao, Baodi Liu, Weifeng Liu, Yicong Zhou

Paper Abstract


Few-shot learning (FSL), which aims to resolve the problem of data scarcity, has attracted considerable attention in recent years. A popular FSL framework contains two phases: (i) the pre-train phase employs the base data to train a CNN-based feature extractor; (ii) the meta-test phase applies the frozen feature extractor to novel data (whose categories differ from those of the base data) and designs a classifier for recognition. To correct the few-shot data distribution, researchers have proposed Semi-Supervised Few-Shot Learning (SSFSL), which introduces unlabeled data. Although SSFSL has been shown to achieve outstanding performance in the FSL community, a fundamental problem remains: the pre-trained feature extractor cannot adapt to the novel data flawlessly due to the cross-category setting, so a large amount of noise is usually introduced into the novel features. We dub this the Feature-Extractor-Maladaptive (FEM) problem. To tackle FEM, we make two efforts in this paper. First, we propose a novel label prediction method, Isolated Graph Learning (IGL). IGL introduces the Laplacian operator to encode the raw data into graph space, which helps reduce the dependence on features when classifying, and then projects the graph representation to label space for prediction. The key point is that IGL can weaken the negative influence of noise from the feature-representation perspective, and it is also flexible enough to complete the training and testing procedures independently, which suits SSFSL. Second, we propose Graph Co-Training (GCT) to tackle this challenge from a multi-modal fusion perspective by extending the proposed IGL to the co-training framework. GCT is a semi-supervised method that exploits unlabeled samples with two modal features to mutually strengthen the IGL classifiers.
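The abstract names two mechanisms: a graph-Laplacian label predictor (IGL) and a two-view co-training loop (GCT). The paper's exact formulations are not given here, so the following is only a minimal NumPy sketch under common assumptions: a Gaussian-kernel affinity graph, the standard normalized-Laplacian closed-form propagation, and a classic co-training rule where each view pseudo-labels its most confident unlabeled sample for the other view. All function names (`build_affinity`, `laplacian_propagate`, `graph_co_training`) are hypothetical, not from the paper.

```python
import numpy as np

def build_affinity(features, sigma=1.0):
    """Gaussian-kernel affinity matrix over the samples (hypothetical helper;
    the paper's exact graph construction is not specified in the abstract)."""
    sq = np.sum(features ** 2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * features @ features.T, 0.0)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)  # no self-loops
    return W

def laplacian_propagate(features, labels, n_classes, alpha=0.99):
    """Closed-form graph label propagation F = (I - alpha*S)^(-1) Y, where S is
    the symmetrically normalized affinity D^(-1/2) W D^(-1/2). labels[i] = -1
    marks an unlabeled sample. This is a generic Laplacian-based scheme, not
    necessarily the paper's IGL solver."""
    W = build_affinity(features)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(W.sum(axis=1), 1e-12))
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    Y = np.zeros((len(features), n_classes))
    for i, y in enumerate(labels):
        if y >= 0:
            Y[i, y] = 1.0
    return np.linalg.solve(np.eye(len(features)) - alpha * S, Y)

def graph_co_training(view_a, view_b, labels, n_classes, rounds=2):
    """Each round, the classifier on one view pseudo-labels its most confident
    unlabeled sample and hands that label to the OTHER view's label set
    (classic co-training; the paper's selection rule may differ)."""
    la, lb = list(labels), list(labels)
    for _ in range(rounds):
        for src, src_lab, dst_lab in ((view_a, la, lb), (view_b, lb, la)):
            F = laplacian_propagate(src, src_lab, n_classes)
            unlabeled = [i for i, y in enumerate(src_lab) if y < 0]
            if not unlabeled:
                continue
            best = max(unlabeled, key=lambda i: F[i].max())
            dst_lab[best] = int(F[best].argmax())  # pseudo-label crosses views
    # Fuse the two views' final soft predictions by summation.
    F = laplacian_propagate(view_a, la, n_classes) \
        + laplacian_propagate(view_b, lb, n_classes)
    return F.argmax(axis=1)
```

On toy data with two well-separated clusters and one labeled seed per class, `laplacian_propagate(X, labels, 2).argmax(axis=1)` recovers the cluster memberships, and `graph_co_training` does the same while exchanging pseudo-labels between two feature views.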
