Paper Title
MERIt: Meta-Path Guided Contrastive Learning for Logical Reasoning
Paper Authors
Paper Abstract
Logical reasoning is of vital importance to natural language understanding. Previous studies either employ graph-based models to incorporate prior knowledge about logical relations, or introduce symbolic logic into neural models through data augmentation. These methods, however, heavily depend on annotated training data, and thus suffer from over-fitting and poor generalization problems due to the dataset sparsity. To address these two problems, in this paper, we propose MERIt, a MEta-path guided contrastive learning method for logical ReasonIng of text, to perform self-supervised pre-training on abundant unlabeled text data. Two novel strategies serve as indispensable components of our method. In particular, a strategy based on meta-path is devised to discover the logical structure in natural texts, followed by a counterfactual data augmentation strategy to eliminate the information shortcut induced by pre-training. The experimental results on two challenging logical reasoning benchmarks, i.e., ReClor and LogiQA, demonstrate that our method outperforms the SOTA baselines with significant improvements.
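The abstract does not spell out the training objective, but contrastive pre-training methods of this kind typically optimize an InfoNCE-style loss that pulls an anchor representation toward a logically consistent positive and pushes it away from counterfactual negatives. Below is a minimal, self-contained sketch of that objective; the function names and the use of raw embedding vectors are illustrative assumptions, not MERIt's actual implementation.

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE contrastive loss (illustrative sketch, not MERIt's code).

    anchor:    embedding of the context sentence pair
    positive:  embedding of a logically consistent continuation
    negatives: embeddings of counterfactual (inconsistent) variants
    """
    logits = [cosine(anchor, positive) / temperature]
    logits += [cosine(anchor, n) / temperature for n in negatives]
    # Numerically stable softmax; loss is -log P(positive | candidates).
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    return -math.log(exps[0] / sum(exps))
```

With a well-aligned positive the loss is small; replacing the positive with a vector close to the negatives drives the loss up, which is the pressure that teaches the encoder to separate consistent from counterfactual logical structure.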