Paper Title
A Survey of Methods for Addressing Class Imbalance in Deep-Learning Based Natural Language Processing
Paper Authors
Paper Abstract
Many natural language processing (NLP) tasks are naturally imbalanced, as some target categories occur much more frequently than others in the real world. In such scenarios, current NLP models still tend to perform poorly on less frequent classes. Addressing class imbalance in NLP is an active research topic, yet finding a good approach for a particular task and imbalance scenario is difficult. With this survey, the first overview of class imbalance in deep-learning based NLP, we provide guidance for NLP researchers and practitioners dealing with imbalanced data. We first discuss various types of controlled and real-world class imbalance. Our survey then covers approaches that have been explicitly proposed for class-imbalanced NLP tasks or, originating in the computer vision community, have been evaluated on them. We organize the methods by whether they are based on sampling, data augmentation, choice of loss function, staged learning, or model design. Finally, we discuss open problems such as dealing with multi-label scenarios, and propose systematic benchmarking and reporting in order to move forward on this problem as a community.
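To make one of the loss-based method families mentioned above concrete, the following is a minimal sketch (not taken from the survey) of inverse-frequency class weighting: rare classes receive a larger weight in the cross-entropy loss, so errors on them contribute more to the gradient. The function names and the normalization choice are illustrative assumptions, not an API defined by the paper.

```python
import math
from collections import Counter

def class_weights(labels):
    """Illustrative inverse-frequency weights, normalized so they
    sum to the number of classes (one common convention)."""
    counts = Counter(labels)
    raw = {c: 1.0 / n for c, n in counts.items()}
    scale = len(counts) / sum(raw.values())
    return {c: w * scale for c, w in raw.items()}

def weighted_cross_entropy(probs, labels, weights):
    """Mean cross-entropy where each example's loss is scaled by
    the weight of its gold class; probs[i][y] is the predicted
    probability of the gold class y for example i."""
    losses = [-weights[y] * math.log(p[y]) for p, y in zip(probs, labels)]
    return sum(losses) / len(losses)

# Imbalanced toy data: class 1 is five times rarer than class 0,
# so it receives a proportionally larger weight.
labels = [0, 0, 0, 0, 1]
w = class_weights(labels)
assert w[1] > w[0]
```

Deep-learning frameworks expose the same idea directly (e.g. a per-class weight argument on their cross-entropy losses), so in practice the weights computed this way are simply passed to the library's loss function.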