Paper Title
Refer, Reuse, Reduce: Generating Subsequent References in Visual and Conversational Contexts
Paper Authors
Abstract
Dialogue participants often refer to entities or situations repeatedly within a conversation, which contributes to its cohesiveness. Subsequent references exploit the common ground accumulated by the interlocutors and hence have several interesting properties: they tend to be shorter and to reuse expressions that were effective in previous mentions. In this paper, we tackle the generation of first and subsequent references in visually grounded dialogue. We propose a generation model that produces referring utterances grounded in both the visual and the conversational context. To assess the referring effectiveness of its output, we also implement a reference resolution system. Our experiments and analyses show that the model produces better, more effective referring utterances than a model not grounded in the dialogue context, and generates subsequent references that exhibit linguistic patterns akin to those of humans.