Paper Title
Contrastive Learning for Prompt-Based Few-Shot Language Learners
Paper Authors
Paper Abstract
The impressive performance of GPT-3 using natural language prompts and in-context learning has inspired work on better fine-tuning of moderately sized models under this paradigm. Following this line of work, we present a contrastive learning framework that clusters inputs from the same class, for better generalization of models trained with only limited examples. Specifically, we propose a supervised contrastive framework that clusters inputs from the same class under different augmented "views" and repels those from different classes. We create different "views" of an example by appending it with different language prompts and contextual demonstrations. Combining a contrastive loss with the standard masked language modeling (MLM) loss of prompt-based few-shot learners, our method improves over state-of-the-art methods on a diverse set of 15 language tasks. Our framework makes minimal assumptions about the task or the base model, and can be applied to many recent methods with little modification. The code will be made available at: https://github.com/yiren-jian/LM-SupCon.