Paper Title


XPrompt: Exploring the Extreme of Prompt Tuning

Authors

Fang Ma, Chen Zhang, Lei Ren, Jingang Wang, Qifan Wang, Wei Wu, Xiaojun Quan, Dawei Song

Abstract

Prompt tuning learns soft prompts to condition frozen Pre-trained Language Models (PLMs) for performing downstream tasks in a parameter-efficient manner. While prompt tuning has gradually reached the performance level of fine-tuning as the model scale increases, there is still a large performance gap between prompt tuning and fine-tuning for models of moderate and small scales (typically less than 11B parameters). In this paper, we empirically show that the trained prompt tokens can have a negative impact on a downstream task and thus degrade its performance. To bridge the gap, we propose a novel Prompt tuning model with an eXtremely small scale (XPrompt) under the regime of lottery tickets hypothesis. Specifically, XPrompt eliminates the negative prompt tokens at different granularity levels through a hierarchical structured pruning, yielding a more parameter-efficient prompt yet with a competitive performance. Comprehensive experiments are carried out on SuperGLUE tasks, and the extensive results indicate that XPrompt is able to close the performance gap at smaller model scales.
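The hierarchical structured pruning described in the abstract can be sketched roughly as follows. This is an illustrative toy, not the authors' implementation: the function name `hierarchical_prune`, the precomputed importance scores, and the fixed keep ratios are all assumptions (XPrompt derives importance scores from the model's sensitivity to mask variables and rewinds in lottery-ticket style, which is omitted here). The sketch only shows the two granularity levels: pruning whole prompt tokens, then pruning pieces within the surviving tokens.

```python
import numpy as np


def hierarchical_prune(prompt, token_scores, piece_scores,
                       token_keep=0.5, piece_keep=0.5):
    """Toy sketch of two-level soft-prompt pruning (hypothetical helper).

    prompt:       (n_tokens, dim) soft prompt embeddings
    token_scores: (n_tokens,) importance score per prompt token
    piece_scores: (n_tokens, dim) importance score per embedding piece
    Returns the prompt with low-importance tokens zeroed out entirely
    (token-level pruning) and, within the kept tokens, low-importance
    pieces zeroed out (piece-level pruning).
    """
    n_tokens, dim = prompt.shape

    # Token level: keep only the top fraction of tokens by importance.
    n_keep = max(1, int(round(token_keep * n_tokens)))
    token_mask = np.zeros(n_tokens)
    token_mask[np.argsort(token_scores)[-n_keep:]] = 1.0

    # Piece level: within each token, keep the top fraction of pieces.
    d_keep = max(1, int(round(piece_keep * dim)))
    piece_mask = np.zeros((n_tokens, dim))
    for i in range(n_tokens):
        piece_mask[i, np.argsort(piece_scores[i])[-d_keep:]] = 1.0

    # Negative (low-importance) tokens and pieces are masked to zero,
    # leaving a much smaller set of effective prompt parameters.
    return prompt * token_mask[:, None] * piece_mask
```

In the paper the masks are learned against the task loss and the surviving prompt is retrained, so the kept parameters compensate for the pruned ones rather than simply being zeroed at inference time.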
