Paper Title
MixGen: A New Multi-Modal Data Augmentation
Paper Authors
Paper Abstract
Data augmentation is a necessity to enhance data efficiency in deep learning. For vision-language pre-training, previous works augment data only for images or only for text. In this paper, we present MixGen: a joint data augmentation for vision-language representation learning that further improves data efficiency. It generates new image-text pairs with semantic relationships preserved by interpolating images and concatenating text. It is simple and can be plugged into existing pipelines. We evaluate MixGen on four architectures, including CLIP, ViLT, ALBEF and TCL, across five downstream vision-language tasks to show its versatility and effectiveness. For example, adding MixGen in ALBEF pre-training leads to absolute performance improvements on downstream tasks: image-text retrieval (+6.2% on COCO fine-tuned and +5.3% on Flickr30K zero-shot), visual grounding (+0.9% on RefCOCO+), visual reasoning (+0.9% on NLVR2), visual question answering (+0.3% on VQA2.0), and visual entailment (+0.4% on SNLI-VE).
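The abstract describes the augmentation as interpolating images and concatenating text from existing pairs. Below is a minimal sketch of that idea in Python/PyTorch; the function name mixgen, the partner-pairing scheme (each sample mixed with its batch neighbor), and the default interpolation weight lam=0.5 are illustrative assumptions, not the paper's exact recipe.

import torch

def mixgen(images, texts, lam=0.5):
    # images: tensor of shape (B, C, H, W); texts: list of B caption strings.
    # lam is the image interpolation weight (assumed default for this sketch).
    # Pair each sample i with its batch neighbor j = (i + 1) % B.
    partner_images = torch.roll(images, shifts=1, dims=0)
    # New image: pixel-wise linear interpolation of the two source images.
    mixed_images = lam * images + (1.0 - lam) * partner_images
    # New text: concatenation of the two source captions.
    mixed_texts = [
        texts[i] + " " + texts[(i + 1) % len(texts)]
        for i in range(len(texts))
    ]
    return mixed_images, mixed_texts

In a pre-training loop, such a routine would be applied to a sampled batch to produce additional image-text pairs that are then trained on alongside (or in place of) the originals, which is what makes the method plug-and-play for pipelines like ALBEF or TCL.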