Paper Title

Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models

Authors

Mengzhou Xia, Mikel Artetxe, Jingfei Du, Danqi Chen, Ves Stoyanov

Abstract

Pre-trained masked language models successfully perform few-shot learning by formulating downstream tasks as text infilling. However, as a strong alternative in full-shot settings, discriminative pre-trained models like ELECTRA do not fit into the paradigm. In this work, we adapt prompt-based few-shot learning to ELECTRA and show that it outperforms masked language models in a wide range of tasks. ELECTRA is pre-trained to distinguish if a token is generated or original. We naturally extend that to prompt-based few-shot learning by training to score the originality of the target options without introducing new parameters. Our method can be easily adapted to tasks involving multi-token predictions without extra computation overhead. Analysis shows that ELECTRA learns distributions that align better with downstream tasks.
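As a rough illustration of the idea described in the abstract (not the authors' released implementation), the sketch below uses Hugging Face's ElectraForPreTraining to fill a hypothetical sentiment template with each candidate label word and score how "original" the discriminator judges the candidate's tokens to be; the checkpoint, template, verbalizer words, and the originality_score helper are assumptions made for this example.

```python
# Hypothetical sketch: scoring verbalizer candidates by "originality" with an
# ELECTRA discriminator. Checkpoint, prompt template, and label words are
# illustrative choices, not the paper's exact setup.
import torch
from transformers import ElectraTokenizerFast, ElectraForPreTraining

model_name = "google/electra-small-discriminator"
tokenizer = ElectraTokenizerFast.from_pretrained(model_name)
model = ElectraForPreTraining.from_pretrained(model_name)
model.eval()

def originality_score(text: str, candidate: str,
                      template: str = "{text} It was {label}.") -> float:
    """Fill the prompt with a candidate label word and return the mean
    probability that the candidate's tokens are original (not replaced)."""
    prompt = template.format(text=text, label=candidate)
    enc = tokenizer(prompt, return_tensors="pt")
    # Locate the candidate's token positions by tokenizing it on its own
    # and searching for the last occurrence of that span in the prompt.
    cand_ids = tokenizer(candidate, add_special_tokens=False)["input_ids"]
    ids = enc["input_ids"][0].tolist()
    start = next(i for i in range(len(ids) - len(cand_ids), -1, -1)
                 if ids[i:i + len(cand_ids)] == cand_ids)
    with torch.no_grad():
        # logits: (seq_len,); a positive logit means "predicted replaced".
        logits = model(**enc).logits[0]
    # P(original) = sigmoid(-logit); average over the candidate's tokens.
    p_original = torch.sigmoid(-logits[start:start + len(cand_ids)])
    return p_original.mean().item()

# Pick the label whose verbalizer the discriminator rates as most original.
review = "A thoroughly enjoyable film with a clever script."
scores = {w: originality_score(review, w) for w in ["great", "terrible"]}
print(max(scores, key=scores.get), scores)
```

Because the discriminator scores every position in a single forward pass, averaging over a multi-token candidate adds no extra computation beyond one pass per option, which is the property the abstract highlights for multi-token predictions.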
