Paper Title

Dynamic Global Memory for Document-level Argument Extraction

Paper Authors

Xinya Du, Sha Li, Heng Ji

Paper Abstract

Extracting informative arguments of events from news articles is a challenging problem in information extraction, which requires a global contextual understanding of each document. While recent work on document-level extraction has gone beyond single-sentence and increased the cross-sentence inference capability of end-to-end models, they are still restricted by certain input sequence length constraints and usually ignore the global context between events. To tackle this issue, we introduce a new global neural generation-based framework for document-level event argument extraction by constructing a document memory store to record the contextual event information and leveraging it to implicitly and explicitly help with decoding of arguments for later events. Empirical results show that our framework outperforms prior methods substantially and it is more robust to adversarially annotated examples with our constrained decoding design. (Our code and resources are available at https://github.com/xinyadu/memory_docie for research purpose.)
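
The abstract describes a per-document memory store that records arguments extracted for earlier events and is used as additional context when decoding arguments for later events. The snippet below is a minimal, illustrative sketch of that idea only; it is not the authors' implementation (see the linked repository for that), and names such as `DocumentMemory` and `build_decoder_context` are hypothetical.

```python
# Illustrative sketch (not the authors' code) of a dynamic document-level
# memory that conditions later argument decoding on earlier extractions.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class DocumentMemory:
    """Records (event_type, role, argument) triples extracted so far in a document."""
    entries: List[Dict[str, str]] = field(default_factory=list)

    def add(self, event_type: str, role: str, argument: str) -> None:
        self.entries.append(
            {"event_type": event_type, "role": role, "argument": argument}
        )

    def as_context(self) -> str:
        # Linearize earlier extractions so a generation model can attend to them.
        return " ".join(
            f"<{e['event_type']}:{e['role']}> {e['argument']}" for e in self.entries
        )


def build_decoder_context(doc_text: str, memory: DocumentMemory, template: str) -> str:
    """Concatenate the unfilled role template, the document, and the memory of
    previously decoded events into one input for a generation-based extractor."""
    return f"{template} </s> {doc_text} </s> {memory.as_context()}"


if __name__ == "__main__":
    memory = DocumentMemory()
    memory.add("Attack", "Attacker", "the militants")

    context = build_decoder_context(
        doc_text="The militants stormed the compound. Two soldiers were injured.",
        memory=memory,
        template="<arg1> attacked <arg2> injuring <arg3>",
    )
    print(context)  # decoding of the later event is conditioned on earlier extractions
```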
