Paper Title

CoMER: Modeling Coverage for Transformer-based Handwritten Mathematical Expression Recognition

Paper Authors

Wenqi Zhao, Liangcai Gao

Paper Abstract

The Transformer-based encoder-decoder architecture has recently made significant advances in recognizing handwritten mathematical expressions. However, the transformer model still suffers from the lack of coverage problem, making its expression recognition rate (ExpRate) inferior to its RNN counterpart. Coverage information, which records the alignment information of the past steps, has proven effective in the RNN models. In this paper, we propose CoMER, a model that adopts the coverage information in the transformer decoder. Specifically, we propose a novel Attention Refinement Module (ARM) to refine the attention weights with past alignment information without hurting its parallelism. Furthermore, we take coverage information to the extreme by proposing self-coverage and cross-coverage, which utilize the past alignment information from the current and previous layers. Experiments show that CoMER improves the ExpRate by 0.61%/2.09%/1.59% compared to the current state-of-the-art model, and reaches 59.33%/59.81%/62.97% on the CROHME 2014/2016/2019 test sets.
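
To make the abstract's core idea concrete, below is a minimal, illustrative sketch of coverage-refined attention in a PyTorch setting. It demonstrates the general technique the paper builds on: accumulate past attention weights into a coverage term and subtract a penalty derived from it from the raw attention scores, discouraging the decoder from re-attending to already-covered image regions. The function name, the exclusive prefix sum, and the fixed ReLU penalty are assumptions for illustration, not the authors' actual ARM implementation.

```python
# Illustrative sketch of coverage-refined attention (assumed names/design,
# not the exact CoMER Attention Refinement Module).
import torch
import torch.nn.functional as F

def coverage_refined_attention(scores: torch.Tensor) -> torch.Tensor:
    """
    scores: raw attention energies, shape (batch, T, L), where
            T = number of decoding steps, L = number of image feature positions.
    Returns refined attention weights of the same shape.
    """
    attn = scores.softmax(dim=-1)                # (B, T, L) raw attention weights
    # Coverage at step t = sum of attention weights from all steps < t,
    # computed for every t at once via an exclusive prefix sum.
    coverage = attn.cumsum(dim=1) - attn         # (B, T, L)
    # Map coverage to a per-position penalty; a fixed ReLU stands in here
    # for a learnable refinement function.
    penalty = F.relu(coverage)
    refined_scores = scores - penalty            # subtract the refinement term
    return refined_scores.softmax(dim=-1)

# Usage example:
# refined = coverage_refined_attention(torch.randn(2, 5, 49))
```

Note that the coverage term is computed for all decoding steps at once with a cumulative sum, which is consistent with the abstract's claim that coverage information can be incorporated without hurting the transformer decoder's parallelism.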
