Paper Title


Transformer Reasoning Network for Image-Text Matching and Retrieval

Paper Authors

Nicola Messina, Fabrizio Falchi, Andrea Esuli, Giuseppe Amato

Paper Abstract


Image-text matching is an interesting and fascinating task in modern AI research. Despite the evolution of deep-learning-based image and text processing systems, multi-modal matching remains a challenging problem. In this work, we consider the problem of accurate image-text matching for the task of multi-modal large-scale information retrieval. State-of-the-art results in image-text matching are achieved by inter-playing image and text features from the two different processing pipelines, usually using mutual attention mechanisms. However, this invalidates any chance to extract separate visual and textual features needed for later indexing steps in large-scale retrieval systems. In this regard, we introduce the Transformer Encoder Reasoning Network (TERN), an architecture built upon one of the modern relationship-aware self-attentive architectures, the Transformer Encoder (TE). This architecture is able to separately reason on the two different modalities and to enforce a final common abstract concept space by sharing the weights of the deeper transformer layers. Thanks to this design, the implemented network is able to produce compact and very rich visual and textual features available for the successive indexing step. Experiments are conducted on the MS-COCO dataset, and we evaluate the results using a discounted cumulative gain metric with relevance computed exploiting caption similarities, in order to assess possibly non-exact but relevant search results. We demonstrate that on this metric we are able to achieve state-of-the-art results in the image retrieval task. Our code is freely available at https://github.com/mesnico/TERN.
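The core design idea in the abstract, two modality-specific encoder stacks whose deeper Transformer Encoder layers share weights to enforce a common abstract concept space, can be illustrated with a minimal NumPy sketch. Everything here is an illustrative assumption rather than the paper's exact configuration: the embedding dimension, the layer counts, the single-head attention without layer normalization, and the use of the first token as the compact global descriptor are all simplifications for exposition.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class TELayer:
    """One simplified Transformer Encoder layer: single-head
    self-attention with an output projection and a residual
    connection (layer norm and feed-forward omitted for brevity)."""
    def __init__(self, d, rng):
        self.Wq = rng.standard_normal((d, d)) / np.sqrt(d)
        self.Wk = rng.standard_normal((d, d)) / np.sqrt(d)
        self.Wv = rng.standard_normal((d, d)) / np.sqrt(d)
        self.Wo = rng.standard_normal((d, d)) / np.sqrt(d)
        self.d = d

    def __call__(self, x):
        # x: (num_tokens, d) -> (num_tokens, d)
        q, k, v = x @ self.Wq, x @ self.Wk, x @ self.Wv
        attn = softmax(q @ k.T / np.sqrt(self.d))
        return (attn @ v) @ self.Wo + x  # residual connection

class TERNSketch:
    """Separate shallow stacks reason on each modality; the deeper
    layers are the *same objects* for both branches, i.e. shared
    weights, which pushes both modalities toward a common space."""
    def __init__(self, d=32, n_separate=2, n_shared=2, seed=0):
        rng = np.random.default_rng(seed)
        self.vis = [TELayer(d, rng) for _ in range(n_separate)]
        self.txt = [TELayer(d, rng) for _ in range(n_separate)]
        self.shared = [TELayer(d, rng) for _ in range(n_shared)]

    def encode(self, tokens, branch):
        # tokens: (num_tokens, d) region or word features
        x = tokens
        for layer in (self.vis if branch == "image" else self.txt):
            x = layer(x)
        for layer in self.shared:  # same weights for both modalities
            x = layer(x)
        return x[0]  # first token as the compact global descriptor

model = TERNSketch()
rng = np.random.default_rng(1)
img_emb = model.encode(rng.standard_normal((10, 32)), "image")
cap_emb = model.encode(rng.standard_normal((7, 32)), "text")
```

Because each branch runs independently up to the shared layers, image and caption descriptors can be computed and stored separately, which is what makes the offline indexing step of a large-scale retrieval system possible; a similarity (e.g. cosine) between the two fixed-size descriptors would then drive matching.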
