Paper Title

Relational Memory Augmented Language Models

Authors

Qi Liu, Dani Yogatama, Phil Blunsom

Abstract

We present a memory-augmented approach to condition an autoregressive language model on a knowledge graph. We represent the graph as a collection of relation triples and retrieve relevant relations for a given context to improve text generation. Experiments on the WikiText-103, WMT19, and enwik8 English datasets demonstrate that our approach produces a better language model in terms of perplexity and bits per character. We also show that relational memory improves coherence, is complementary to token-based memory, and enables causal interventions. Our model provides a simple yet effective way to combine an autoregressive language model with a knowledge graph for more coherent and logical generation.
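To make the abstract's pipeline concrete (retrieve relation triples relevant to the current context, then condition the language model on them), here is a minimal sketch. The toy knowledge graph, the word-overlap retriever, and the `[memory] ... [context] ...` prompt format are all illustrative assumptions, not the paper's actual retriever or conditioning mechanism.

```python
from collections import Counter

# Toy knowledge graph: a collection of (head, relation, tail) triples.
KG = [
    ("Barack Obama", "born_in", "Honolulu"),
    ("Honolulu", "located_in", "Hawaii"),
    ("Barack Obama", "occupation", "politician"),
]

def retrieve_triples(context, k=2):
    """Rank triples by word overlap with the context; a crude
    stand-in for the retrieval step described in the abstract."""
    ctx_words = Counter(context.lower().split())
    def score(triple):
        return sum(ctx_words[w] for w in " ".join(triple).lower().split())
    return sorted(KG, key=score, reverse=True)[:k]

def build_conditioned_input(context):
    """Linearize the retrieved triples and prepend them so an
    autoregressive LM can attend to them as relational memory."""
    memory = " ; ".join(" ".join(t) for t in retrieve_triples(context))
    return "[memory] " + memory + " [context] " + context

print(build_conditioned_input("Barack Obama was born in"))
# [memory] Barack Obama born_in Honolulu ; Barack Obama occupation
# politician [context] Barack Obama was born in
```

In the paper the retrieved triples feed the model through its memory mechanism rather than plain string concatenation; the prompt format above is only a simple way to show the conditioning idea end to end.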
