Paper Title

Long Document Summarization with Top-down and Bottom-up Inference

Paper Authors

Bo Pang, Erik Nijkamp, Wojciech Kryściński, Silvio Savarese, Yingbo Zhou, Caiming Xiong

Paper Abstract

Text summarization aims to condense long documents and retain key information. Critical to the success of a summarization model is the faithful inference of latent representations of words or tokens in the source documents. Most recent models infer the latent representations with a transformer encoder, which is purely bottom-up. Also, self-attention-based inference models face the challenge of quadratic complexity with respect to sequence length. We propose a principled inference framework to improve summarization models on these two aspects. Our framework assumes a hierarchical latent structure of a document where the top-level captures the long range dependency at a coarser time scale and the bottom token level preserves the details. Critically, this hierarchical structure enables token representations to be updated in both a bottom-up and top-down manner. In the bottom-up pass, token representations are inferred with local self-attention to leverage its efficiency. Top-down correction is then applied to allow tokens to capture long-range dependency. We demonstrate the effectiveness of the proposed framework on a diverse set of summarization datasets, including narrative, conversational, scientific documents and news. Our model achieves (1) competitive or better performance on short documents with higher memory and compute efficiency, compared to full attention transformers, and (2) state-of-the-art performance on a wide range of long document summarization benchmarks, compared to recent efficient transformers. We also show that our model can summarize an entire book and achieve competitive performance using $0.27\%$ parameters (464M vs. 175B) and much less training data, compared to a recent GPT-3-based model. These results indicate the general applicability and benefits of the proposed framework.
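
The abstract describes a two-level inference scheme: efficient local (windowed) self-attention infers token representations bottom-up, pooled representations at a coarser time scale attend globally, and a top-down pass lets tokens attend back to the coarse level to capture long-range dependency. Below is a minimal PyTorch sketch of that idea; the module name `TopDownBottomUpBlock`, the fixed window size, the average pooling, and the single cross-attention correction are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch of bottom-up/top-down inference for long inputs.
# Assumes a two-level hierarchy: windowed local self-attention at the
# token level, full self-attention over pooled segment vectors at the
# top level, and token-to-segment cross-attention as the correction.
import torch
import torch.nn as nn

class TopDownBottomUpBlock(nn.Module):
    def __init__(self, d_model=512, n_heads=8, window=64):
        super().__init__()
        self.window = window
        # Bottom-up: self-attention restricted to local windows of tokens.
        self.local_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Top level: full self-attention over the (much shorter) segment sequence.
        self.segment_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Top-down correction: tokens query the segment-level representations.
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x):  # x: (batch, seq_len, d_model)
        b, n, d = x.shape
        w = self.window
        assert n % w == 0, "pad seq_len to a multiple of the window size"
        # --- Bottom-up pass: attention within each local window ---
        local = x.reshape(b * n // w, w, d)
        local, _ = self.local_attn(local, local, local)
        x = self.norm1(x + local.reshape(b, n, d))
        # --- Coarse top level: one pooled vector per window, full attention ---
        seg = x.reshape(b, n // w, w, d).mean(dim=2)  # (b, n/w, d)
        seg, _ = self.segment_attn(seg, seg, seg)
        # --- Top-down correction: tokens attend to the segment summary ---
        corr, _ = self.cross_attn(x, seg, seg)
        return self.norm2(x + corr)

block = TopDownBottomUpBlock()
tokens = torch.randn(2, 1024, 512)
print(block(tokens).shape)  # torch.Size([2, 1024, 512])
```

With window size w, the per-layer attention cost drops from the O(n^2) of full self-attention to roughly O(n·w) for the local pass plus O((n/w)^2) for the segment pass, which is what makes the bottom-up/top-down scheme memory- and compute-efficient on long documents while still propagating long-range information to every token.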
