Paper Title
Repairing the Cracked Foundation: A Survey of Obstacles in Evaluation Practices for Generated Text
Paper Authors
Paper Abstract
Evaluation practices in natural language generation (NLG) have many known flaws, but improved evaluation approaches are rarely widely adopted. This issue has become more urgent, since neural NLG models have improved to the point where their outputs can often no longer be distinguished based on the surface-level features that older metrics rely on. This paper surveys the issues with human and automatic model evaluations and with commonly used datasets in NLG that have been pointed out over the past 20 years. We summarize, categorize, and discuss how researchers have been addressing these issues and what their findings mean for the current state of model evaluations. Building on those insights, we lay out a long-term vision for NLG evaluation and propose concrete steps for researchers to improve their evaluation processes. Finally, we analyze 66 NLG papers from recent NLP conferences to assess how well they already follow these suggestions and identify which areas require more drastic changes to the status quo.