Paper Title

X-LXMERT: Paint, Caption and Answer Questions with Multi-Modal Transformers

Authors

Jaemin Cho, Jiasen Lu, Dustin Schwenk, Hannaneh Hajishirzi, Aniruddha Kembhavi

Abstract

Mirroring the success of masked language models, vision-and-language counterparts like ViLBERT, LXMERT and UNITER have achieved state-of-the-art performance on a variety of multimodal discriminative tasks such as visual question answering and visual grounding. Recent work has also successfully adapted such models to the generative task of image captioning. This raises the question: can these models go the other way and generate images from pieces of text? Our analysis of a popular representative of this model family, LXMERT, finds that it is unable to generate rich and semantically meaningful imagery with its current training setup. We introduce X-LXMERT, an extension to LXMERT with training refinements that enable it to paint: discretizing visual representations, using uniform masking with a large range of masking ratios, and aligning the right pre-training datasets to the right objectives. X-LXMERT's image generation capabilities rival those of state-of-the-art generative models, while its question answering and captioning abilities remain comparable to LXMERT's. Finally, we demonstrate the generality of these training refinements by adding image generation capabilities to UNITER, producing X-UNITER.
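To make the two key refinements in the abstract concrete, the following is a minimal sketch in Python/NumPy of (a) discretizing grid-level visual features against a codebook of cluster centroids, and (b) sampling the masking ratio uniformly from a wide range rather than using a fixed BERT-style ratio. The function names, tensor shapes, codebook size, and ratio range are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Illustrative sketch of two X-LXMERT-style training refinements.
# All names, shapes, and constants below are assumptions for demonstration.

def discretize_features(grid_features, codebook):
    """Map each spatial grid feature to its nearest codebook entry
    (vector quantization), turning continuous visual features into
    discrete prediction targets.
    grid_features: (N, D) array; codebook: (K, D) array of centroids."""
    # Squared Euclidean distance from every feature to every centroid.
    dists = ((grid_features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)  # (N,) discrete code indices

def uniform_mask(num_positions, rng):
    """Sample a masking ratio uniformly from a wide range, then mask
    that fraction of positions (contrast with a fixed ~15% ratio)."""
    ratio = rng.uniform(0.1, 1.0)  # assumed range, for illustration
    num_masked = max(1, int(ratio * num_positions))
    mask = np.zeros(num_positions, dtype=bool)
    mask[rng.choice(num_positions, size=num_masked, replace=False)] = True
    return mask

rng = np.random.default_rng(0)
features = rng.normal(size=(64, 256))   # e.g. an 8x8 grid of features
codebook = rng.normal(size=(512, 256))  # K=512 centroids (assumed size)
codes = discretize_features(features, codebook)
mask = uniform_mask(len(codes), rng)
print(codes[:8], mask.sum(), "positions masked")
```

Sampling the masking ratio per example exposes the model to conditions ranging from light infilling to near-total generation, which is what allows the same masked-prediction objective to be reused for image generation at inference time.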
