Paper Title

Contrastive Triple Extraction with Generative Transformer

Paper Authors

Hongbin Ye, Ningyu Zhang, Shumin Deng, Mosha Chen, Chuanqi Tan, Fei Huang, Huajun Chen

Paper Abstract

Triple extraction is an essential task in information extraction for natural language processing and knowledge graph construction. In this paper, we revisit end-to-end triple extraction as a sequence generation task. Since generative triple extraction may struggle to capture long-term dependencies and may generate unfaithful triples, we introduce a novel model: contrastive triple extraction with a generative transformer. Specifically, we introduce a single shared transformer module for encoder-decoder-based generation. To generate faithful results, we propose a novel triplet contrastive training objective. Moreover, we introduce two mechanisms to further improve model performance: batch-wise dynamic attention-masking and triple-wise calibration. Experimental results on three datasets (NYT, WebNLG, and MIE) show that our approach achieves better performance than the baselines.
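To make the triplet contrastive training objective more concrete, below is a minimal PyTorch sketch of what such an objective can look like: a margin loss that pulls the representation of a generated triple toward the gold triple and away from a corrupted one. The function name, cosine-similarity formulation, and margin value are illustrative assumptions, not the paper's actual implementation.

import torch
import torch.nn.functional as F

def triplet_contrastive_loss(anchor: torch.Tensor,
                             positive: torch.Tensor,
                             negative: torch.Tensor,
                             margin: float = 1.0) -> torch.Tensor:
    # anchor: representation of the generated triple;
    # positive: gold triple embedding; negative: corrupted triple.
    # All three are hypothetical stand-ins for the paper's encodings.
    pos_sim = F.cosine_similarity(anchor, positive, dim=-1)
    neg_sim = F.cosine_similarity(anchor, negative, dim=-1)
    # Hinge-style margin loss over cosine similarities: push the
    # positive similarity above the negative one by at least `margin`.
    return F.relu(margin - pos_sim + neg_sim).mean()

# Toy usage with random embeddings (batch of 4, hidden size 256).
anchor = torch.randn(4, 256)
positive = torch.randn(4, 256)
negative = torch.randn(4, 256)
loss = triplet_contrastive_loss(anchor, positive, negative)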
