Paper Title

Improving Pre-trained Language Model Fine-tuning with Noise Stability Regularization

Paper Authors

Hang Hua, Xingjian Li, Dejing Dou, Cheng-Zhong Xu, Jiebo Luo

Paper Abstract

The advent of large-scale pre-trained language models has contributed greatly to the recent progress in natural language processing. Many state-of-the-art language models are first trained on a large text corpus and then fine-tuned on downstream tasks. Despite its recent success and wide adoption, fine-tuning a pre-trained language model often suffers from overfitting, which leads to poor generalizability due to the extremely high complexity of the model and the limited training samples from downstream tasks. To address this problem, we propose a novel and effective fine-tuning framework, named Layerwise Noise Stability Regularization (LNSR). Specifically, we propose to inject standard Gaussian noise or in-manifold noise and regularize the hidden representations of the fine-tuned model. We first provide theoretical analyses to support the efficacy of our method. We then demonstrate the advantages of the proposed method over other state-of-the-art algorithms, including L2-SP, Mixout and SMART. While these previous works only verify the effectiveness of their methods on relatively simple text classification tasks, we also verify the effectiveness of our method on question answering tasks, where the target problem is much more difficult and more training examples are available. Furthermore, extensive experimental results indicate that the proposed algorithm can not only enhance the in-domain performance of the language models but also improve the domain generalization performance on out-of-domain data.
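
The abstract sketches the core mechanism of LNSR: perturb a layer's hidden representation with Gaussian (or in-manifold) noise and penalize how far the layer's output drifts under that perturbation. Below is a minimal PyTorch sketch of this idea, assuming an isotropic Gaussian perturbation; the function name `noise_stability_penalty`, the noise scale `sigma`, the weight `lambda_reg`, and the toy layer are illustrative assumptions, not the paper's exact layerwise formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def noise_stability_penalty(layer, hidden, sigma=0.01):
    """Squared L2 distance between a layer's clean output and its output
    when standard Gaussian noise (scaled by sigma) is injected at the input."""
    clean = layer(hidden)
    noisy = layer(hidden + sigma * torch.randn_like(hidden))
    return F.mse_loss(noisy, clean)

# Toy usage with a stand-in "layer" of a fine-tuned encoder.
layer = nn.Sequential(nn.Linear(16, 16), nn.GELU())
hidden = torch.randn(8, 4, 16)      # (batch, seq_len, hidden_dim)
task_loss = torch.tensor(0.0)       # placeholder for the downstream task loss
lambda_reg = 0.1                    # assumed regularization weight
loss = task_loss + lambda_reg * noise_stability_penalty(layer, hidden)
loss.backward()
```

In the full method, the penalty would presumably be accumulated over multiple encoder layers, which is what makes the regularization layerwise; in-manifold noise would replace the isotropic Gaussian draw above.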
