Paper Title

Contrastive Demonstration Tuning for Pre-trained Language Models

Paper Authors

Xiaozhuan Liang, Ningyu Zhang, Siyuan Cheng, Zhenru Zhang, Chuanqi Tan, Huajun Chen

Paper Abstract

Pretrained language models can be effectively stimulated by textual prompts or demonstrations, especially in low-data scenarios. Recent works have focused on automatically searching discrete or continuous prompts or optimized verbalizers, yet studies of demonstrations remain limited. Concretely, demonstration examples are crucial for the final performance of prompt-tuning. In this paper, we propose a novel pluggable, extensible, and efficient approach named contrastive demonstration tuning, which is free of demonstration sampling. Furthermore, the proposed approach can be: (i) plugged into any previous prompt-tuning approach; (ii) extended to widespread classification tasks with a large number of categories. Experimental results on 16 datasets illustrate that our method, integrated with the previous approaches LM-BFF and P-tuning, can yield better performance. Code is available at https://github.com/zjunlp/PromptKG/tree/main/research/Demo-Tuning.
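To make the high-level idea in the abstract more concrete, below is a minimal, hedged sketch of what "demonstration tuning without demonstration sampling" could look like: a learnable "virtual demonstration" embedding per class is trained with an InfoNCE-style contrastive loss against the [MASK]-token representation of each instance, and this term would be added to the usual prompt-tuning loss. This is not the authors' implementation (see the repository linked above for that); all names and hyperparameters here (VirtualDemoContrastiveHead, hidden_dim, tau, the weighting lambda) are hypothetical choices made for illustration.

```python
# Hedged sketch only -- NOT the paper's implementation. It illustrates the
# general idea of pairing prompt-tuning with a contrastive objective over
# learnable "virtual demonstration" embeddings, so that no real
# demonstrations need to be sampled.

import torch
import torch.nn as nn
import torch.nn.functional as F


class VirtualDemoContrastiveHead(nn.Module):
    """One learnable 'virtual demonstration' vector per class (assumption)."""

    def __init__(self, num_classes: int, hidden_dim: int, tau: float = 0.1):
        super().__init__()
        # Learnable demonstration embeddings, one per class.
        self.demos = nn.Parameter(torch.randn(num_classes, hidden_dim) * 0.02)
        self.tau = tau

    def forward(self, mask_repr: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        """mask_repr: [batch, hidden_dim] representation of the [MASK] token.

        Returns an InfoNCE-style loss that pulls each instance toward the
        virtual demonstration of its own class and pushes it away from the
        demonstrations of the other classes.
        """
        z = F.normalize(mask_repr, dim=-1)    # [B, H] instance representations
        d = F.normalize(self.demos, dim=-1)   # [C, H] virtual demonstrations
        logits = z @ d.t() / self.tau         # [B, C] scaled cosine similarities
        return F.cross_entropy(logits, labels)


if __name__ == "__main__":
    # Toy usage with random tensors standing in for encoder outputs.
    head = VirtualDemoContrastiveHead(num_classes=5, hidden_dim=768)
    mask_repr = torch.randn(4, 768)
    labels = torch.tensor([0, 2, 4, 1])
    contrastive_loss = head(mask_repr, labels)
    # In practice this term would be combined with the usual prompt-tuning
    # (masked-LM verbalizer) loss, e.g. total = lm_loss + lambda * contrastive_loss.
    print(contrastive_loss.item())
```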
