Paper Title


Unsupervised Image-to-Image Translation via Pre-trained StyleGAN2 Network

Authors

Jialu Huang, Jing Liao, Sam Kwong

Abstract


Image-to-Image (I2I) translation is a popular topic in academia, and it has also been applied in real-world industry for tasks like image synthesis, super-resolution, and colorization. However, traditional I2I translation methods jointly train on data from two or more domains, which requires substantial computational resources. Moreover, the results are of lower quality and contain many more artifacts. The training process can be unstable when the data in different domains are not balanced, and mode collapse is more likely to happen. We propose a new I2I translation method that generates a new model in the target domain via a series of model transformations on a pre-trained StyleGAN2 model in the source domain. We further propose an inversion method to achieve the conversion between an image and its latent vector. By feeding the latent vector into the generated model, we can perform I2I translation between the source domain and the target domain. Both qualitative and quantitative evaluations were conducted to show that the proposed method achieves outstanding performance in terms of image quality, diversity, and semantic similarity to the input and reference images, compared to state-of-the-art works.
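The pipeline the abstract describes — invert an image into the shared latent space of the source-domain generator, then decode that latent with the transformed target-domain generator — can be sketched with toy linear "generators" standing in for StyleGAN2. Everything here is a hypothetical illustration (the matrices `A_src`/`A_tgt`, the gradient-descent `invert` routine, and the dimensions are all made up, not the paper's actual models or inversion method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the generators (hypothetical): the "target" generator is
# a perturbed copy of the "source" one, mimicking a model transformed from a
# pre-trained source-domain model while sharing its latent space.
A_src = rng.normal(size=(16, 4))                 # source-domain "generator"
A_tgt = A_src + 0.1 * rng.normal(size=(16, 4))   # target-domain "generator"

def g_src(w):
    return A_src @ w

def g_tgt(w):
    return A_tgt @ w

def invert(image, steps=2000, lr=0.01):
    """Inversion by gradient descent on ||g_src(w) - image||^2,
    i.e. recover a latent vector w whose source-domain rendering
    matches the input image."""
    w = np.zeros(A_src.shape[1])
    for _ in range(steps):
        grad = 2.0 * A_src.T @ (A_src @ w - image)  # gradient of the loss
        w -= lr * grad
    return w

# I2I translation: invert in the source domain, decode in the target domain.
x_src = g_src(rng.normal(size=4))  # an "image" from the source domain
w_hat = invert(x_src)              # its recovered latent vector
x_tgt = g_tgt(w_hat)               # the translated target-domain "image"
```

In this toy setup the inversion converges to a latent whose source-domain reconstruction matches the input almost exactly; the translated output then differs from the input only through the generator transformation, which is the behavior the method relies on.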
