Paper Title
Protum: A New Method For Prompt Tuning Based on "[MASK]"
Paper Authors
Paper Abstract
Recently, prompt tuning \cite{lester2021power} has gradually become a new paradigm for NLP, which depends only on the representation of words while freezing the parameters of pre-trained language models (PLMs) to obtain remarkable performance on downstream tasks. It maintains consistency with the Masked Language Model (MLM) \cite{devlin2018bert} task used during pre-training, and avoids some issues that may occur during fine-tuning. Naturally, we consider that the "[MASK]" tokens carry more useful information than other tokens, because the model combines them with the context to predict the masked tokens. Among current prompt tuning methods, a serious problem arises when multiple answer tokens must be predicted: the predicted tokens can compose randomly, so these methods have to map tokens to labels with the help of a verbalizer. In response to the above issue, we propose a new \textbf{Pro}mpt \textbf{Tu}ning based on "[\textbf{M}ASK]" (\textbf{Protum}) method in this paper, which constructs a classification task through the information carried by the hidden layer of "[MASK]" tokens and then predicts the labels directly rather than the answer tokens. At the same time, we explore how different hidden layers under "[MASK]" affect our classification model on many different datasets. Finally, we find that our \textbf{Protum} achieves much better performance than fine-tuning after continuous pre-training, with less time consumption. Our model facilitates the practical application of large models in NLP.
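The core idea in the abstract lends itself to a short illustration. Below is a minimal PyTorch sketch of a Protum-style classifier, assuming a BERT-style PLM from Hugging Face Transformers; the prompt template, the choice of hidden layer, and the single linear classification head are illustrative assumptions rather than the authors' exact configuration. A frozen PLM encodes a prompted input, the hidden state at the "[MASK]" position is taken from a chosen layer, and the label is predicted directly from that vector, bypassing the MLM vocabulary head and any verbalizer.

```python
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel


class ProtumStyleClassifier(nn.Module):
    """Predict labels directly from the hidden state under "[MASK]" of a frozen PLM."""

    def __init__(self, plm_name="bert-base-uncased", num_labels=2, layer_index=-1):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(plm_name)
        self.plm = AutoModel.from_pretrained(plm_name)
        # Freeze the PLM so only the small label classifier is trained.
        for p in self.plm.parameters():
            p.requires_grad = False
        self.layer_index = layer_index          # which hidden layer under "[MASK]" to use
        self.classifier = nn.Linear(self.plm.config.hidden_size, num_labels)

    def forward(self, texts):
        # Wrap each input in a prompt template containing a single mask slot
        # (hypothetical template, for illustration only).
        mask = self.tokenizer.mask_token
        prompts = [f"{t} It was {mask}." for t in texts]
        enc = self.tokenizer(prompts, return_tensors="pt",
                             padding=True, truncation=True)
        with torch.no_grad():
            out = self.plm(**enc, output_hidden_states=True)
        hidden = out.hidden_states[self.layer_index]            # (batch, seq_len, dim)
        mask_positions = enc["input_ids"] == self.tokenizer.mask_token_id
        mask_hidden = hidden[mask_positions]                     # (batch, dim), one mask per prompt
        # Classify labels directly: no MLM vocabulary head and no verbalizer.
        return self.classifier(mask_hidden)


model = ProtumStyleClassifier(num_labels=2)
logits = model(["the movie was surprisingly good", "a dull and lifeless plot"])
print(logits.shape)  # torch.Size([2, 2])
```

Because the PLM parameters stay frozen, only the linear head over the "[MASK]" hidden state is trained, which is consistent with the abstract's claim of lower time consumption; varying `layer_index` corresponds to the paper's exploration of different hidden layers under "[MASK]".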