Paper Title
Improving the Cross-Lingual Generalisation in Visual Question Answering
Paper Authors
Paper Abstract
While multilingual vision-language pretrained models have delivered several benefits, recent benchmarks across various tasks and languages show poor cross-lingual generalisation when these models are applied to non-English data, with a large gap between (supervised) English performance and (zero-shot) cross-lingual transfer. In this work, we investigate the poor performance of these models on a zero-shot cross-lingual visual question answering (VQA) task, where models are fine-tuned on English visual question answering data and evaluated on 7 typologically diverse languages. We improve cross-lingual transfer with three strategies: (1) we introduce a linguistic prior objective that augments the cross-entropy loss with a similarity-based loss to guide the model during training; (2) we learn a task-specific subnetwork that improves cross-lingual generalisation and reduces variance without modifying the model; (3) we augment training examples using synthetic code-mixing to promote alignment of embeddings between the source and target languages. Our experiments on xGQA with the pretrained multilingual multimodal transformers UC2 and M3P demonstrate the consistent effectiveness of the proposed fine-tuning strategy across all 7 languages, with our sparse models outperforming existing transfer methods. Code and data to reproduce our findings are publicly available.
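The three strategies can be illustrated with short sketches. A minimal sketch of strategy (1), the linguistic prior objective: treating VQA as classification over an answer vocabulary (as in xGQA), the cross-entropy loss is augmented with a similarity-based term. The soft-target construction, the KL form of the similarity loss, and the hyperparameters `alpha` and `tau` below are illustrative assumptions, not the paper's exact formulation.

```python
import torch.nn.functional as F

def linguistic_prior_loss(logits, labels, answer_embeddings, alpha=0.5, tau=0.1):
    """Cross-entropy augmented with a similarity-based prior over answers."""
    # Standard classification loss over the answer vocabulary.
    ce = F.cross_entropy(logits, labels)

    # Cosine similarity of each gold answer to every answer class.
    emb = F.normalize(answer_embeddings, dim=-1)             # (V, d)
    prior = F.softmax(emb[labels] @ emb.t() / tau, dim=-1)   # (B, V) soft targets

    # The KL term pulls the model's distribution toward semantically close answers.
    kl = F.kl_div(F.log_softmax(logits, dim=-1), prior, reduction="batchmean")
    return ce + alpha * kl
```

For strategy (2), a sketch of selecting and training a task-specific subnetwork; the magnitude-based selection criterion and `keep_ratio` are assumptions standing in for whichever sparse fine-tuning recipe is actually used:

```python
def subnetwork_masks(model, keep_ratio=0.1):
    """Keep only the highest-magnitude fraction of each weight matrix trainable."""
    masks = {}
    for name, param in model.named_parameters():
        if param.dim() < 2:  # leave biases / LayerNorm parameters dense
            continue
        k = max(1, int(keep_ratio * param.numel()))
        threshold = param.abs().flatten().topk(k).values.min()
        masks[name] = (param.abs() >= threshold).float()
    return masks

def mask_gradients(model, masks):
    """Call after loss.backward(): zero gradients outside the subnetwork."""
    for name, param in model.named_parameters():
        if name in masks and param.grad is not None:
            param.grad.mul_(masks[name])
```

For strategy (3), a sketch of synthetic code-mixing: English question words are stochastically replaced with target-language translations from a word-level bilingual dictionary. The dictionary format and mixing probability `p` are assumptions for illustration.

```python
import random

def code_mix(question, bilingual_dict, p=0.3, rng=random):
    """Replace each word with a dictionary translation with probability p."""
    return " ".join(
        rng.choice(bilingual_dict[tok]) if tok in bilingual_dict and rng.random() < p else tok
        for tok in question.split()
    )

# Hypothetical example:
# code_mix("what color is the car", {"color": ["farbe"], "car": ["auto"]})
# might return "what farbe is the auto".
```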