Paper Title

BLEU Neighbors: A Reference-less Approach to Automatic Evaluation

Paper Authors

Kawin Ethayarajh, Dorsa Sadigh

Paper Abstract

Evaluation is a bottleneck in the development of natural language generation (NLG) models. Automatic metrics such as BLEU rely on references, but for tasks such as open-ended generation, there are no references to draw upon. Although language diversity can be estimated using statistical measures such as perplexity, measuring language quality requires human evaluation. However, because human evaluation at scale is slow and expensive, it is used sparingly; it cannot be used to rapidly iterate on NLG models, in the way BLEU is used for machine translation. To this end, we propose BLEU Neighbors, a nearest neighbors model for estimating language quality by using the BLEU score as a kernel function. On existing datasets for chitchat dialogue and open-ended sentence generation, we find that -- on average -- the quality estimation from a BLEU Neighbors model has a lower mean squared error and higher Spearman correlation with the ground truth than individual human annotators. Despite its simplicity, BLEU Neighbors even outperforms state-of-the-art models on automatically grading essays, including models that have access to a gold-standard reference essay.
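To make the core idea concrete, the method can be read as a k-nearest-neighbors estimator in which BLEU plays the role of the similarity kernel. Below is a minimal sketch under stated assumptions: it presumes a labeled corpus of (sentence, human quality score) pairs, and the function name `bleu_neighbors_score`, the bigram BLEU weights, the choice of k, and the `min_sim` abstention threshold are all illustrative placeholders, not the authors' exact configuration.

```python
# Minimal sketch of a BLEU-Neighbors-style quality estimator (illustrative,
# not the paper's exact setup). Requires nltk.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def bleu_neighbors_score(candidate, corpus, k=5, min_sim=0.1):
    """Estimate the quality of `candidate` as the mean human score of its
    k nearest neighbors in `corpus`, using BLEU as the similarity kernel.

    corpus: list of (sentence: str, quality: float) pairs.
    Returns None when no neighbor clears the similarity threshold,
    i.e., the model abstains rather than guess for out-of-support inputs.
    """
    smooth = SmoothingFunction().method1
    hyp = candidate.split()
    sims = []
    for sent, quality in corpus:
        ref = [sent.split()]
        # BLEU between the candidate and one labeled sentence is the kernel value.
        sim = sentence_bleu(ref, hyp, weights=(0.5, 0.5),
                            smoothing_function=smooth)
        sims.append((sim, quality))
    # Keep the k most similar labeled sentences, then drop weak neighbors.
    neighbors = sorted(sims, reverse=True)[:k]
    neighbors = [(s, q) for s, q in neighbors if s >= min_sim]
    if not neighbors:
        return None  # abstain: no sufficiently similar neighbors
    return sum(q for _, q in neighbors) / len(neighbors)
```

The abstention branch reflects a natural design choice for any nearest-neighbors model: when a candidate sentence has no labeled neighbor with non-trivial BLEU overlap, the averaged label would be meaningless, so returning no estimate is safer than returning a noisy one.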
