Paper Title


Dialogue Generation on Infrequent Sentence Functions via Structured Meta-Learning

Paper Authors

Yifan Gao, Piji Li, Wei Bi, Xiaojiang Liu, Michael R. Lyu, Irwin King

Paper Abstract


Sentence function is an important linguistic feature indicating the communicative purpose in uttering a sentence. Incorporating sentence functions into conversations has shown improvements in the quality of generated responses. However, the number of utterances for different types of fine-grained sentence functions is extremely imbalanced. Besides a small number of high-resource sentence functions, a large portion of sentence functions is infrequent. Consequently, dialogue generation conditioned on these infrequent sentence functions suffers from data deficiency. In this paper, we investigate a structured meta-learning (SML) approach for dialogue generation on infrequent sentence functions. We treat dialogue generation conditioned on different sentence functions as separate tasks, and apply model-agnostic meta-learning to high-resource sentence functions data. Furthermore, SML enhances meta-learning effectiveness by promoting knowledge customization among different sentence functions but simultaneously preserving knowledge generalization for similar sentence functions. Experimental results demonstrate that SML not only improves the informativeness and relevance of generated responses, but also can generate responses consistent with the target sentence functions.
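The training scheme the abstract describes — treating dialogue generation conditioned on each sentence function as a separate task and applying model-agnostic meta-learning (MAML) over the high-resource tasks — can be illustrated with a minimal sketch. This is a hypothetical toy example (a one-parameter model with first-order MAML, scalar "tasks" standing in for sentence functions), not the paper's actual SML implementation, and it omits the structured knowledge-customization component:

```python
# Toy "tasks": each high-resource sentence function maps to a target scalar;
# the "model" is a single parameter theta fit by gradient descent on squared
# error. Task names and values are illustrative assumptions.
TASKS = {"interrogative": 2.0, "declarative": -1.0, "imperative": 0.5}

def loss_grad(theta, target):
    # Gradient of the per-task loss 0.5 * (theta - target)^2 w.r.t. theta.
    return theta - target

def maml_step(theta, tasks, inner_lr=0.1, outer_lr=0.05, inner_steps=3):
    """One meta-update: adapt to each task separately (inner loop), then move
    the shared initialization using the post-adaptation gradients
    (first-order MAML approximation)."""
    meta_grad = 0.0
    for target in tasks.values():
        adapted = theta
        for _ in range(inner_steps):  # inner loop: task-specific adaptation
            adapted -= inner_lr * loss_grad(adapted, target)
        # Outer gradient evaluated at the adapted parameters.
        meta_grad += loss_grad(adapted, target)
    return theta - outer_lr * meta_grad / len(tasks)

theta = 5.0
for _ in range(200):
    theta = maml_step(theta, TASKS)

# The learned initialization settles near the tasks' common structure,
# so a few inner steps suffice to adapt to a new, infrequent task.
```

In this quadratic toy setting the meta-learned initialization converges toward the mean of the task targets; the analogous idea at full scale is an initialization of the dialogue model from which a few gradient steps on scarce data for an infrequent sentence function yield a well-adapted generator.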
