Paper Title
Looking back to lower-level information in few-shot learning
Paper Authors
Paper Abstract
Humans are capable of learning new concepts from small numbers of examples. In contrast, supervised deep learning models usually lack the ability to extract reliable predictive rules from limited amounts of data when attempting to classify new examples. This challenging scenario is commonly known as few-shot learning. Few-shot learning has garnered increased attention in recent years due to its significance for many real-world problems. Recently, new methods relying on meta-learning paradigms combined with graph-based structures, which model the relationships between examples, have shown promising results on a variety of few-shot classification tasks. However, existing work on few-shot learning has focused only on the feature embeddings produced by the last layer of the neural network. In this work, we propose utilizing lower-level, supporting information, namely the feature embeddings of hidden neural network layers, to improve classifier accuracy. Building on a graph-based meta-learning framework, we develop a method called Looking-Back, in which such lower-level information is used to construct additional graphs for label propagation in limited-data settings. Our experiments on two popular few-shot learning datasets, miniImageNet and tieredImageNet, show that our method can utilize the lower-level information in the network to improve state-of-the-art classification performance.
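To make the core mechanism concrete, below is a minimal sketch in Python/NumPy of label propagation over similarity graphs built from feature embeddings at several network layers, in the spirit of the approach described above. This is not the authors' implementation: the function names (`build_graph`, `propagate_labels`), the Gaussian kernel, the hyperparameters, and the toy data are all illustrative assumptions; only the closed-form propagation rule F = (I − αS)⁻¹Y (Zhou et al., 2004) is standard.

```python
# Minimal sketch (assumed, not the paper's code): build one similarity graph
# per layer's embeddings, run label propagation on each, and average scores.
import numpy as np

def build_graph(embeddings, k=5, sigma=1.0):
    """Row-sparsified, symmetrically normalized graph from one layer's embeddings."""
    # Pairwise squared Euclidean distances -> Gaussian affinities.
    d2 = np.sum((embeddings[:, None, :] - embeddings[None, :, :]) ** 2, axis=-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(w, 0.0)
    # Keep only the k strongest edges per node, then symmetrize.
    weakest = np.argsort(w, axis=1)[:, :-k]
    np.put_along_axis(w, weakest, 0.0, axis=1)
    w = np.maximum(w, w.T)
    # Symmetric normalization S = D^{-1/2} W D^{-1/2}.
    d_inv_sqrt = 1.0 / np.sqrt(w.sum(axis=1) + 1e-12)
    return w * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def propagate_labels(graphs, y_onehot, alpha=0.99):
    """Closed-form label propagation F = (I - alpha*S)^(-1) Y, averaged over graphs."""
    n = y_onehot.shape[0]
    scores = np.zeros_like(y_onehot, dtype=float)
    for s in graphs:
        scores += np.linalg.inv(np.eye(n) - alpha * s) @ y_onehot
    return scores / len(graphs)

# Toy 2-way episode: 2 labeled support + 4 unlabeled query examples, with
# (randomly generated) embeddings standing in for two layers of a backbone.
rng = np.random.default_rng(0)
last_layer = rng.normal(size=(6, 8))     # final-layer embeddings
hidden_layer = rng.normal(size=(6, 16))  # hidden-layer ("looked-back") embeddings
y = np.zeros((6, 2))
y[0, 0] = y[1, 1] = 1.0                  # only the support rows carry labels
graphs = [build_graph(last_layer, k=3), build_graph(hidden_layer, k=3)]
query_pred = propagate_labels(graphs, y).argmax(axis=1)[2:]
```

In the paper's actual setting, the embeddings would come from a meta-trained backbone's final and hidden layers for the support and query examples of an episode; the sketch above only illustrates how graphs from additional layers can contribute extra label-propagation evidence alongside the final-layer graph.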