Title
Unbiased Multi-Modality Guidance for Image Inpainting
Authors
Abstract
Image inpainting is an ill-posed problem that aims to recover missing or damaged image content from incomplete images with masks. Previous works usually predict auxiliary structures (e.g., edges, segmentation, and contours) to help fill in visually realistic patches in a multi-stage fashion. However, imprecise auxiliary priors may yield biased inpainted results; moreover, methods built from multiple stages of complex neural networks are time-consuming. To address these issues, we develop an end-to-end multi-modality guided transformer network comprising one inpainting branch and two auxiliary branches for semantic segmentation and edge textures. Within each transformer block, the proposed multi-scale spatial-aware attention module efficiently learns multi-modal structural features via auxiliary denormalization. Unlike previous methods that rely on direct guidance from biased priors, our method enriches semantically consistent context in an image based on discriminative interplay information from multiple modalities. Comprehensive experiments on several challenging image inpainting datasets show that our method achieves state-of-the-art performance and handles various regular/irregular masks efficiently.
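The abstract does not spell out what "auxiliary denormalization" computes. A plausible reading, by analogy with spatially-adaptive conditional normalization, is that the inpainting features are normalized and then re-modulated by scale/shift maps derived from the auxiliary branches (segmentation/edges). The NumPy sketch below illustrates that idea only; the `gamma`/`beta` projections are hypothetical stand-ins for learned convolutions, not the paper's actual layers.

```python
import numpy as np

def aux_denorm(x, aux, eps=1e-5):
    """Illustrative auxiliary denormalization (an assumption, not the
    paper's exact formulation): normalize the inpainting features per
    channel, then modulate them with scale (gamma) and shift (beta)
    maps computed from auxiliary-branch features.

    x, aux: arrays of shape (C, H, W).
    """
    # Per-channel instance normalization of the inpainting features.
    mu = x.mean(axis=(1, 2), keepdims=True)
    var = x.var(axis=(1, 2), keepdims=True)
    x_norm = (x - mu) / np.sqrt(var + eps)
    # Hypothetical projections of the auxiliary features; in a real
    # network these would be learned 1x1 convolutions.
    gamma = np.tanh(aux)   # stand-in for conv_gamma(aux)
    beta = 0.1 * aux       # stand-in for conv_beta(aux)
    # Spatially re-modulate the normalized features.
    return (1.0 + gamma) * x_norm + beta
```

With zero auxiliary input the module reduces to plain instance normalization, which is the sanity check one would expect from this style of conditional normalization.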