Paper Title


Improving Multi-Document Summarization through Referenced Flexible Extraction with Credit-Awareness

Paper Authors

Yun-Zhu Song, Yi-Syuan Chen, Hong-Han Shuai

Paper Abstract


A notable challenge in Multi-Document Summarization (MDS) is the extremely long length of the input. In this paper, we present an extract-then-abstract Transformer framework to overcome this problem. Specifically, we leverage pre-trained language models to construct a hierarchical extractor for salient sentence selection across documents and an abstractor for rewriting the selected contents as summaries. However, learning such a framework is challenging, since the optimal contents for the abstractor are generally unknown. Previous works typically create a pseudo extraction oracle to enable supervised learning for both the extractor and the abstractor. Nevertheless, we argue that the performance of such methods can be restricted by insufficient information for prediction and inconsistent objectives between training and testing. To this end, we propose a loss weighting mechanism that makes the model aware of the unequal importance of the sentences not in the pseudo extraction oracle, and leverage the fine-tuned abstractor to generate summary references as auxiliary signals for learning the extractor. Moreover, we propose a reinforcement learning method that can be efficiently applied to the extractor to harmonize the optimization between training and testing. Experimental results show that our framework substantially outperforms strong baselines with comparable model sizes and achieves the best results on the Multi-News, Multi-XScience, and WikiCatSum corpora.
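The abstract only describes the credit-aware loss weighting at a high level. The sketch below is a minimal PyTorch illustration of one way such a weighting could be realized for the extractor: sentences in the pseudo extraction oracle are treated as full positives, while non-oracle sentences have their loss contribution scaled by a similarity signal derived from the abstractor-generated summary reference. The function name, the similarity signal, and the exact weighting formula are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed, not the authors' code) of credit-aware loss weighting
# for extractive sentence selection.
import torch
import torch.nn.functional as F

def credit_aware_extraction_loss(sent_logits, oracle_mask, sent_to_reference_sim):
    """
    sent_logits:           (num_sents,) extractor scores for each source sentence
    oracle_mask:           (num_sents,) 1.0 for sentences in the pseudo extraction oracle
    sent_to_reference_sim: (num_sents,) similarity of each sentence to the
                           abstractor-generated summary reference, in [0, 1]
    """
    # Oracle sentences keep full weight; non-oracle sentences that resemble the
    # auxiliary summary reference are penalized less as negatives, reflecting
    # their unequal importance.
    weights = oracle_mask + (1.0 - oracle_mask) * (1.0 - sent_to_reference_sim)
    return F.binary_cross_entropy_with_logits(sent_logits, oracle_mask, weight=weights)

# Toy usage
logits = torch.randn(5)
oracle = torch.tensor([1.0, 0.0, 0.0, 1.0, 0.0])
sims = torch.tensor([0.9, 0.7, 0.1, 0.8, 0.2])
print(credit_aware_extraction_loss(logits, oracle, sims))
```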
