Paper Title

Conditionally Adaptive Multi-Task Learning: Improving Transfer Learning in NLP Using Fewer Parameters & Less Data

Paper Authors

Jonathan Pilault, Amine Elhattami, Christopher Pal

Paper Abstract

Multi-Task Learning (MTL) networks have emerged as a promising method for transferring learned knowledge across different tasks. However, MTL must deal with challenges such as overfitting to low resource tasks, catastrophic forgetting, and negative task transfer (or learning interference). Often, in Natural Language Processing (NLP), a separate model per task is needed to obtain the best performance. However, many fine-tuning approaches are both parameter inefficient, i.e., potentially involving one new model per task, and highly susceptible to losing knowledge acquired during pretraining. We propose a novel Transformer architecture consisting of a new conditional attention mechanism as well as a set of task-conditioned modules that facilitate weight sharing. Through this construction (a hypernetwork adapter), we achieve more efficient parameter sharing and mitigate forgetting by keeping half of the weights of a pretrained model fixed. We also use a new multi-task data sampling strategy to mitigate the negative effects of data imbalance across tasks. Using this approach, we are able to surpass single task fine-tuning methods while being parameter and data efficient (using around 66% of the data for weight updates). Compared to other BERT Large methods on GLUE, our 8-task model surpasses other Adapter methods by 2.8% and our 24-task model outperforms models that use MTL and single task fine-tuning by 0.7-1.0%. We show that a larger variant of our single multi-task model approach performs competitively across 26 NLP tasks and yields state-of-the-art results on a number of test and development sets. Our code is publicly available at https://github.com/CAMTL/CA-MTL.
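To make the abstract's central idea more concrete, below is a minimal, self-contained sketch of what a task-conditioned attention layer with mostly frozen pretrained weights could look like. This is only an illustration of the general technique described in the abstract, not the authors' CA-MTL implementation (refer to https://github.com/CAMTL/CA-MTL for the actual code); the class and parameter names (`TaskConditionedAttention`, `bias_generator`, `task_id`, and so on) are hypothetical.

```python
# Illustrative sketch: self-attention whose logits are shifted by a bias generated
# from a learned task embedding, while the "pretrained" projections stay frozen.
# Assumptions are noted in comments; this is not the paper's exact architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TaskConditionedAttention(nn.Module):
    def __init__(self, hidden_dim: int, num_tasks: int, num_heads: int = 8):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = hidden_dim // num_heads
        # Stand-in for pretrained Q/K/V projections, kept frozen to mitigate forgetting.
        self.qkv = nn.Linear(hidden_dim, 3 * hidden_dim)
        self.qkv.weight.requires_grad_(False)
        self.qkv.bias.requires_grad_(False)
        self.out = nn.Linear(hidden_dim, hidden_dim)
        # Task-conditioned parameters: the only new, trainable pieces in this sketch.
        self.task_embedding = nn.Embedding(num_tasks, hidden_dim)
        self.bias_generator = nn.Linear(hidden_dim, num_heads)  # one scalar shift per head

    def forward(self, x: torch.Tensor, task_id: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, hidden_dim); task_id: (batch,) integer task indices
        b, s, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = q.view(b, s, self.num_heads, self.head_dim).transpose(1, 2)
        k = k.view(b, s, self.num_heads, self.head_dim).transpose(1, 2)
        v = v.view(b, s, self.num_heads, self.head_dim).transpose(1, 2)
        logits = q @ k.transpose(-2, -1) / self.head_dim ** 0.5  # (b, heads, s, s)
        # Additive per-head bias conditioned on the task, broadcast over positions.
        task_bias = self.bias_generator(self.task_embedding(task_id))  # (b, heads)
        logits = logits + task_bias[:, :, None, None]
        attn = F.softmax(logits, dim=-1)
        ctx = (attn @ v).transpose(1, 2).reshape(b, s, d)
        return self.out(ctx)


if __name__ == "__main__":
    layer = TaskConditionedAttention(hidden_dim=64, num_tasks=8)
    x = torch.randn(2, 10, 64)
    task_id = torch.tensor([0, 3])
    print(layer(x, task_id).shape)  # torch.Size([2, 10, 64])
```

The sketch mirrors the parameter-efficiency argument in the abstract: the large projection matrices are shared and frozen across tasks, and only the small task embedding and bias generator are trained, so adding a task adds very few parameters.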
