Paper Title
KGLM: Integrating Knowledge Graph Structure in Language Models for Link Prediction
Paper Authors

Paper Abstract
The ability of knowledge graphs to represent complex relationships at scale has led to their adoption for various needs, including knowledge representation, question answering, and recommendation systems. Knowledge graphs are often incomplete in the information they represent, necessitating knowledge graph completion tasks. Pre-trained and fine-tuned language models have shown promise in these tasks, although these models ignore the intrinsic information encoded in the knowledge graph, namely the entity and relation types. In this work, we propose the Knowledge Graph Language Model (KGLM) architecture, in which we introduce a new entity/relation embedding layer that learns to differentiate between entity and relation types, thereby allowing the model to learn the structure of the knowledge graph. We show that further pre-training the language model with this additional embedding layer on triples extracted from the knowledge graph, followed by the standard fine-tuning phase, sets a new state-of-the-art for the link prediction task on benchmark datasets.
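To make the architectural idea concrete, the sketch below shows what such an input embedding block might look like. This is not the paper's implementation: the class name KGLMEmbedding, the layer sizes, and the three-way head/relation/tail type vocabulary are illustrative assumptions, following the common pattern of adding a learned type embedding (analogous to BERT's segment embeddings) on top of the token and position embeddings.

```python
import torch
import torch.nn as nn

class KGLMEmbedding(nn.Module):
    """Minimal sketch of an input embedding block with an extra
    entity/relation-type embedding layer. Names and sizes are
    assumptions, not the paper's exact implementation."""

    def __init__(self, vocab_size, hidden_size, max_len, num_ent_rel_types):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, hidden_size)
        self.position_emb = nn.Embedding(max_len, hidden_size)
        # The new layer: one embedding per entity/relation type, letting the
        # model tell head-entity, relation, and tail-entity tokens apart.
        self.ent_rel_emb = nn.Embedding(num_ent_rel_types, hidden_size)
        self.norm = nn.LayerNorm(hidden_size)
        self.dropout = nn.Dropout(0.1)

    def forward(self, token_ids, ent_rel_type_ids):
        # Sum token, position, and entity/relation-type embeddings,
        # mirroring how BERT sums token, position, and segment embeddings.
        positions = torch.arange(token_ids.size(1), device=token_ids.device)
        x = (self.token_emb(token_ids)
             + self.position_emb(positions)
             + self.ent_rel_emb(ent_rel_type_ids))
        return self.dropout(self.norm(x))

# Usage: tokens of a verbalized triple (head, relation, tail), with each
# token tagged by its role in the triple (0 = head, 1 = relation, 2 = tail).
emb = KGLMEmbedding(vocab_size=30522, hidden_size=768,
                    max_len=512, num_ent_rel_types=3)
token_ids = torch.randint(0, 30522, (1, 16))              # toy token ids
ent_rel_type_ids = torch.tensor([[0]*6 + [1]*4 + [2]*6])  # role tags per token
out = emb(token_ids, ent_rel_type_ids)
print(out.shape)  # torch.Size([1, 16, 768])
```

In this reading, the type-tagged embeddings are what would be trained during the additional pre-training pass over knowledge-graph triples, before the standard fine-tuning for link prediction.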