Title

Deep Reinforced Self-Attention Masks for Abstractive Summarization (DR.SAS)

Authors

Ankit Chadha, Mohamed Masoud

Abstract

We present a novel architectural scheme to tackle the abstractive summarization problem on the CNN/DM dataset, fusing Reinforcement Learning (RL) with UniLM, a pre-trained deep learning model designed to solve various natural language tasks. We have tested the limits of learning fine-grained attention in Transformers to improve summarization quality. UniLM applies attention to the entire token space in a global fashion. We propose DR.SAS, which applies the Actor-Critic (AC) algorithm to learn a dynamic self-attention distribution over the tokens, reducing redundancy and generating factual, coherent summaries of higher quality. After performing hyperparameter tuning, we achieved better ROUGE results compared to the baseline. Because of the optimization over ROUGE rewards, our model tends to be more extractive/factual yet coherent in detail. We present a detailed error analysis with examples of the strengths and limitations of our model. Our codebase will be publicly available on our GitHub.
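To make the actor-critic idea in the abstract concrete, the sketch below shows one possible shape of such an update: an "actor" proposes a per-token self-attention keep/drop mask, a "critic" estimates the expected reward, and a scalar reward (standing in for a ROUGE-based score) drives a policy-gradient update. This is a minimal illustration, not the authors' DR.SAS implementation; the module names, dimensions, and the toy reward function are assumptions made for the example.

```python
# Minimal actor-critic sketch for learning a self-attention mask (illustrative only).
import torch
import torch.nn as nn
from torch.distributions import Bernoulli

class MaskActorCritic(nn.Module):
    def __init__(self, hidden_size: int = 64):
        super().__init__()
        # Actor: maps each token representation to a keep-probability for the mask.
        self.actor = nn.Sequential(nn.Linear(hidden_size, 1), nn.Sigmoid())
        # Critic: estimates the expected reward from the mean token representation.
        self.critic = nn.Linear(hidden_size, 1)

    def forward(self, token_states: torch.Tensor):
        keep_prob = self.actor(token_states).squeeze(-1)            # (batch, seq_len)
        value = self.critic(token_states.mean(dim=1)).squeeze(-1)   # (batch,)
        return keep_prob, value

def ac_step(model, optimizer, token_states, reward_fn):
    keep_prob, value = model(token_states)
    dist = Bernoulli(probs=keep_prob)
    mask = dist.sample()                 # sampled self-attention mask over tokens
    reward = reward_fn(mask)             # stand-in for a ROUGE-based reward
    advantage = reward - value.detach()
    actor_loss = -(dist.log_prob(mask).sum(dim=-1) * advantage).mean()
    critic_loss = (reward - value).pow(2).mean()
    loss = actor_loss + critic_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = MaskActorCritic()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    states = torch.randn(2, 10, 64)      # fake encoder states (batch=2, seq_len=10)
    # Toy reward favoring sparser masks; a real system would score generated
    # summaries with ROUGE against references instead.
    toy_reward = lambda m: 1.0 - m.mean(dim=-1)
    for _ in range(3):
        ac_step(model, opt, states, toy_reward)
```

In the actual system described by the abstract, the reward would come from ROUGE scores of summaries decoded under the sampled mask, and the token states would come from the pre-trained UniLM encoder rather than random tensors.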
