Paper Title

SwapText: Image Based Texts Transfer in Scenes

Paper Authors

Qiangpeng Yang, Hongsheng Jin, Jun Huang, Wei Lin

Paper Abstract

Swapping text in scene images while preserving the original fonts, colors, sizes, and background textures is a challenging task due to the complex interplay between different factors. In this work, we present SwapText, a three-stage framework to transfer texts across scene images. First, a novel text swapping network is proposed to replace the text labels in the foreground image. Second, a background completion network is learned to reconstruct the background images. Finally, the generated foreground and background images are used to generate the word image through a fusion network. Using the proposed framework, we can manipulate the texts of the input images even under severe geometric distortion. Qualitative and quantitative results are presented on several scene text datasets, including both regular and irregular text datasets. We conducted extensive experiments to demonstrate the usefulness of our method in applications such as image-based text translation and text image synthesis.
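The three-stage data flow described in the abstract can be sketched as the composition below. This is a minimal illustrative sketch, not the authors' implementation: the three functions are hypothetical stand-ins for the trained networks (which in the paper operate on image tensors), and dictionaries stand in for images so the flow of text and background information is easy to follow.

```python
# Hypothetical placeholders for the three learned networks in SwapText.
# Real versions are trained CNNs operating on image tensors; here a dict
# with "text" and "texture" keys stands in for a scene-text image.

def text_swapping_network(foreground, new_text):
    # Stage 1: render `new_text` in the style of the original foreground text.
    out = dict(foreground)
    out["text"] = new_text
    return out

def background_completion_network(image):
    # Stage 2: erase the original text and inpaint the background texture.
    return {"texture": image["texture"]}

def fusion_network(foreground, background):
    # Stage 3: blend the restyled text onto the completed background.
    return {"text": foreground["text"], "texture": background["texture"]}

def swap_text(image, new_text):
    """Three-stage pipeline as described in the abstract."""
    fg = text_swapping_network(image, new_text)
    bg = background_completion_network(image)
    return fusion_network(fg, bg)

result = swap_text({"text": "SALE", "texture": "brick"}, "OPEN")
print(result)  # {'text': 'OPEN', 'texture': 'brick'}
```

The point of the decomposition is that text style transfer and background inpainting are handled by separate specialists, with the fusion stage reconciling their outputs into the final word image.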
