Paper Title
Joint Reasoning on Hybrid-knowledge sources for Task-Oriented Dialog
Paper Authors
Paper Abstract
Traditional systems designed for task-oriented dialog utilize knowledge present only in structured knowledge sources to generate responses. However, relevant information required to generate responses may also reside in unstructured sources, such as documents. Recent state-of-the-art models such as HyKnow and SeKnow, aimed at overcoming these challenges, make limiting assumptions about the knowledge sources. For instance, these systems assume that certain types of information, such as a phone number, are always present in a structured knowledge base (KB), while information about aspects such as entrance ticket prices is always available in documents. In this paper, we create a modified version of the MultiWOZ-based dataset prepared by SeKnow to demonstrate how current methods suffer significant degradation in performance when strict assumptions about the source of information are removed. Then, in line with recent work exploiting pre-trained language models, we fine-tune a BART-based model using prompts for the tasks of querying knowledge sources as well as for response generation, without making assumptions about the information present in each knowledge source. Through a series of experiments, we demonstrate that our model is robust to perturbations to knowledge modality (source of information), and that it can fuse information from structured as well as unstructured knowledge to generate responses.
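The abstract describes prompting a single model over both structured (KB) and unstructured (document) knowledge so that no assumption is made about which source holds a given piece of information. A minimal sketch of how such a hybrid prompt might be assembled before being fed to a sequence-to-sequence model; all function names and special tokens here are illustrative assumptions, not the paper's actual format:

```python
# Illustrative sketch (not the authors' code): serialize the dialog context
# together with BOTH structured KB results and unstructured document
# passages into one prompt, so the model can draw the answer from either
# source. Special tokens (<user>, <kb>, <doc>, ...) are hypothetical.

def build_knowledge_prompt(dialog_history, kb_results, documents):
    """Interleave dialog turns, KB slot-value pairs, and document
    passages into a single flat prompt string."""
    turns = " ".join(f"<user> {u} <system> {s}" for u, s in dialog_history)
    kb = " ".join(f"<kb> {slot}={value}" for slot, value in kb_results)
    docs = " ".join(f"<doc> {passage}" for passage in documents)
    return f"{turns} {kb} {docs} <response>"

prompt = build_knowledge_prompt(
    dialog_history=[("I need a hotel in the centre.",
                     "Alexander B&B is available.")],
    kb_results=[("phone", "01223525725")],
    documents=["Entrance to the garden is free for hotel guests."],
)
```

Because both modalities appear in the same input sequence, the downstream generator is free to ground its response in whichever source actually contains the requested information, which is the robustness property the experiments probe.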