Paper Title

Concept Learners for Few-Shot Learning

Authors

Kaidi Cao, Maria Brbic, Jure Leskovec

Abstract

Developing algorithms that are able to generalize to a novel task given only a few labeled examples represents a fundamental challenge in closing the gap between machine- and human-level performance. The core of human cognition lies in the structured, reusable concepts that help us to rapidly adapt to new tasks and provide reasoning behind our decisions. However, existing meta-learning methods learn complex representations across prior labeled tasks without imposing any structure on the learned representations. Here we propose COMET, a meta-learning method that improves generalization ability by learning to learn along human-interpretable concept dimensions. Instead of learning a joint unstructured metric space, COMET learns mappings of high-level concepts into semi-structured metric spaces, and effectively combines the outputs of independent concept learners. We evaluate our model on few-shot tasks from diverse domains, including fine-grained image classification, document categorization and cell type annotation on a novel dataset from a biological domain developed in our work. COMET significantly outperforms strong meta-learning baselines, achieving 6-15% relative improvement on the most challenging 1-shot learning tasks, while unlike existing methods providing interpretations behind the model's predictions.
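The abstract's key mechanism, independent concept learners whose per-concept metric-space distances are combined for classification, can be illustrated with a minimal sketch. This is not the authors' implementation: the feature-masking form of the concept embeddings and the unweighted sum of per-concept squared Euclidean distances are illustrative assumptions, as is the toy 1-shot setting.

```python
import numpy as np

def concept_embed(x, masks):
    # x: (d,) feature vector; masks: (n_concepts, d) binary concept masks.
    # Masking features per concept is an illustrative assumption for how each
    # concept learner maps an input into its own metric space.
    return masks * x  # (n_concepts, d): one embedding per concept

def classify(query, support, labels, masks):
    # Prototypical-style few-shot classification: per-concept class prototypes
    # are means of support embeddings; concept learners are combined by
    # summing per-concept squared Euclidean distances to the prototypes.
    classes = np.unique(labels)
    q = concept_embed(query, masks)
    scores = []
    for c in classes:
        emb = np.stack([concept_embed(s, masks) for s in support[labels == c]])
        proto = emb.mean(axis=0)          # (n_concepts, d) per-concept prototypes
        dist = ((q - proto) ** 2).sum()   # combined distance across concepts
        scores.append(-dist)
    return classes[int(np.argmax(scores))]
```

Because each concept contributes a separate distance term, the per-concept distances can also be inspected individually, which is the source of the interpretability the abstract claims.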
