Paper Title

Multi-task learning for natural language processing in the 2020s: where are we going?

Authors

Joseph Worsham, Jugal Kalita

Abstract

Multi-task learning (MTL) significantly pre-dates the deep learning era, and it has seen a resurgence in the past few years as researchers have been applying MTL to deep learning solutions for natural language tasks. While steady MTL research has always been present, there is a growing interest driven by the impressive successes published in the related fields of transfer learning and pre-training, such as BERT, and the release of new challenge problems, such as GLUE and the NLP Decathlon (decaNLP). These efforts place more focus on how weights are shared across networks, evaluate the re-usability of network components and identify use cases where MTL can significantly outperform single-task solutions. This paper strives to provide a comprehensive survey of the numerous recent MTL contributions to the field of natural language processing and provide a forum to focus efforts on the hardest unsolved problems in the next decade. While novel models that improve performance on NLP benchmarks are continually produced, lasting MTL challenges remain unsolved which could hold the key to better language understanding, knowledge discovery and natural language interfaces.
