Paper Title

OLALA: Object-Level Active Learning for Efficient Document Layout Annotation

Authors

Zejiang Shen, Jian Zhao, Melissa Dell, Yaoliang Yu, Weining Li

Abstract

Document images often have intricate layout structures, with numerous content regions (e.g., text, figures, tables) densely arranged on each page. This makes the manual annotation of layout datasets expensive and inefficient. These characteristics also challenge existing active learning methods, as image-level scoring and selection suffer from the overexposure of common objects. Inspired by recent progress in semi-supervised learning and self-training, we propose an Object-Level Active Learning framework for efficient document layout Annotation, OLALA. In this framework, only the regions with the most ambiguous object predictions within an image are selected for annotators to label, optimizing the use of the annotation budget. For unselected predictions, a semi-automatic correction algorithm is proposed to identify certain errors based on prior knowledge of layout structures and rectify them with minor supervision. Additionally, we carefully design a perturbation-based object scoring function for document images. It governs the object selection process by evaluating prediction ambiguity, and considers both the positions and categories of predicted layout objects. Extensive experiments show that, given the same labeling budget, OLALA can significantly boost model performance and improve annotation efficiency. Code for this paper can be accessed via https://github.com/lolipopshock/detectron2_al.
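The core mechanism the abstract describes, scoring each predicted object by how unstable its box and class are under small perturbations and spending the annotation budget only on the most ambiguous objects, can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, the IoU-based disagreement measure, and the index-wise pairing of base and perturbed predictions are all assumptions made for illustration.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) form."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda box: (box[2] - box[0]) * (box[3] - box[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def ambiguity_score(base_pred, perturbed_preds):
    """Score one object: low box agreement (position term) and unstable
    class labels (category term) across perturbed predictions both
    raise the ambiguity, mirroring the abstract's two criteria."""
    box, label = base_pred
    ious, flips = [], 0
    for p_box, p_label in perturbed_preds:
        ious.append(iou(box, p_box))
        flips += int(p_label != label)
    position_term = 1.0 - float(np.mean(ious))
    category_term = flips / len(perturbed_preds)
    return position_term + category_term

def select_for_annotation(predictions, perturbed, budget):
    """Pick the `budget` most ambiguous objects in an image for human
    labeling; the rest would be kept as pseudo-labels, optionally
    passed through a semi-automatic correction step."""
    scores = [ambiguity_score(p, q) for p, q in zip(predictions, perturbed)]
    order = np.argsort(scores)[::-1]
    return [predictions[i] for i in order[:budget]]
```

The key difference from image-level active learning is visible in `select_for_annotation`: ranking and selection happen per object rather than per page, so a page full of confidently predicted common objects consumes little of the budget.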
