Paper Title
Adapting Task-Oriented Dialogue Models for Email Conversations
Paper Authors
Paper Abstract
Intent detection is a key part of any Natural Language Understanding (NLU) system of a conversational assistant. Detecting the correct intent is essential yet difficult for email conversations, where multiple directives and intents are present. In such settings, conversation context can become a key disambiguating factor for detecting the user's request to the assistant. One prominent way of incorporating context is to model past conversation history, as task-oriented dialogue models do. However, the long-form nature of email conversations restricts direct use of the latest advances in task-oriented dialogue models. In this paper, we therefore provide an effective transfer learning framework (EMToD) that allows the latest developments in dialogue models to be adapted to long-form conversations. We show that the proposed EMToD framework improves intent detection performance over pre-trained language models by 45% and over pre-trained dialogue models by 30% for task-oriented email conversations. Additionally, the modular nature of the proposed framework allows plug-and-play of any future developments in both pre-trained language models and task-oriented dialogue models.
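The central obstacle the abstract names is that task-oriented dialogue models expect short, bounded conversational turns, while emails are long-form. As a purely illustrative sketch (not the paper's actual EMToD method), one way to bridge that gap is a preprocessing step that flattens an email thread into truncated (speaker, text) turns that a turn-based dialogue model could consume; the function name, token budget, and thread format below are all hypothetical:

```python
# Hypothetical sketch: convert a long-form email thread into short
# dialogue-style turns for a turn-based task-oriented dialogue model.
# This is NOT the paper's EMToD framework, only an illustration of the
# long-form-to-turns adaptation problem the abstract describes.

def email_thread_to_turns(thread, max_tokens_per_turn=64):
    """Flatten each email into a (speaker, text) turn.

    Long email bodies are truncated to a fixed whitespace-token budget,
    mimicking the bounded turn length dialogue models assume.
    """
    turns = []
    for sender, body in thread:
        tokens = body.split()
        text = " ".join(tokens[:max_tokens_per_turn])
        turns.append((sender, text))
    return turns

# Toy thread: two users and an assistant; the last email carries the
# directive ("book a room") whose intent the context helps disambiguate.
thread = [
    ("alice@example.com", "Hi team, can we schedule a review meeting next week?"),
    ("assistant", "Sure, which day works best for everyone?"),
    ("bob@example.com", "Thursday afternoon works for me. Please book a room too."),
]

turns = email_thread_to_turns(thread, max_tokens_per_turn=8)
```

Each resulting turn stays within the token budget, so downstream intent detection sees a conversation-shaped input rather than a single long document; the real framework's contribution lies in how the pre-trained models are then transferred, which the sketch does not attempt.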