Paper Title

ImageArg: A Multi-modal Tweet Dataset for Image Persuasiveness Mining

Authors

Zhexiong Liu, Meiqi Guo, Yue Dai, Diane Litman

Abstract

The growing interest in developing corpora of persuasive texts has promoted applications in automated systems, e.g., debating and essay scoring systems; however, there is little prior work mining image persuasiveness from an argumentative perspective. To expand persuasiveness mining into a multi-modal realm, we present a multi-modal dataset, ImageArg, consisting of annotations of image persuasiveness in tweets. The annotations are based on a persuasion taxonomy we developed to explore image functionalities and the means of persuasion. We benchmark image persuasiveness tasks on ImageArg using widely-used multi-modal learning methods. The experimental results show that our dataset offers a useful resource for this rich and challenging topic, and there is ample room for modeling improvement.
