Paper Title
Fusing Context Into Knowledge Graph for Commonsense Question Answering
Paper Authors
Paper Abstract
Commonsense question answering (QA) requires a model to grasp both commonsense and factual knowledge to answer questions about world events. Many prior methods couple language modeling with knowledge graphs (KGs). However, although a KG contains rich structural information, it lacks the context needed for a more precise understanding of concepts. This creates a gap when fusing knowledge graphs into language modeling, especially when labeled data is scarce. We therefore propose to employ external entity descriptions to provide contextual information for knowledge understanding. We retrieve descriptions of related concepts from Wiktionary and feed them as additional input to pre-trained language models. The resulting model achieves state-of-the-art results on the CommonsenseQA dataset and the best result among non-generative models on OpenBookQA.
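The abstract's core recipe is input fusion: concatenate a retrieved concept description with the question and answer choice before encoding. Below is a minimal sketch of that idea, assuming the HuggingFace transformers library; the model name, the hard-coded description (standing in for a real Wiktionary lookup), and the untrained single-logit scoring head are all illustrative assumptions, not the paper's implementation.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "roberta-base"  # illustrative choice; any pre-trained encoder works

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# num_labels=1 yields a single relevance score per input; this head is
# randomly initialized here and would need fine-tuning in practice.
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=1)
model.eval()

question = "Where would you most likely see a bald eagle?"
choices = ["nest", "library", "ocean floor"]
# Hypothetical description standing in for an actual Wiktionary lookup
# of the question concept "bald eagle".
description = "bald eagle: a large sea eagle native to North America."

scores = []
for choice in choices:
    # Fuse context: the description is passed as a second segment so the
    # encoder sees the concept definition alongside the question and choice.
    enc = tokenizer(question + " " + choice, description,
                    return_tensors="pt", truncation=True, max_length=128)
    with torch.no_grad():
        score = model(**enc).logits.squeeze().item()
    scores.append(score)

best = max(range(len(choices)), key=scores.__getitem__)
print(f"Predicted answer: {choices[best]}")
```

After fine-tuning on a multiple-choice QA dataset, the choice whose fused input receives the highest score would be selected as the answer.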