Paper Title

Representation Decomposition for Image Manipulation and Beyond

Paper Authors

Shang-Fu Chen, Jia-Wei Yan, Ya-Fan Su, Yu-Chiang Frank Wang

Paper Abstract

Representation disentanglement aims at learning interpretable features, so that the output can be recovered or manipulated accordingly. While existing works such as InfoGAN and AC-GAN address this task, they derive disjoint attribute codes for feature disentanglement, which is not applicable to existing/pre-trained generative models. In this paper, we propose a decomposition GAN (dec-GAN), which is able to decompose an existing latent representation into content and attribute features. Guided by a classifier pre-trained on the attributes of interest, our dec-GAN decomposes the attributes of interest from the latent representation, while data-recovery and feature-consistency objectives enforce the learning of our proposed method. Our experiments on multiple image datasets confirm the effectiveness and robustness of our dec-GAN over recent representation disentanglement models.
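The sketch below is a minimal, illustrative reading of the abstract only, not the authors' implementation: a `Decomposer` module splits a pre-trained generator's latent code into content and attribute parts, a fixed pre-trained classifier supervises the attribute part, and simple data-recovery and feature-consistency terms stand in for the paper's objectives. All module names, loss forms, and dimensions are assumptions.

```python
# Hedged sketch of the dec-GAN idea described in the abstract.
# Assumptions: generator is a fixed pre-trained G(z) -> image with z_dim inputs,
# classifier is a fixed attribute classifier on images, and the exact loss
# forms/weights are placeholders, not the authors' choices.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Decomposer(nn.Module):
    """Splits a latent code z into content and attribute features (hypothetical)."""
    def __init__(self, z_dim=128, attr_dim=16):
        super().__init__()
        self.content_head = nn.Linear(z_dim, z_dim - attr_dim)
        self.attr_head = nn.Linear(z_dim, attr_dim)

    def forward(self, z):
        return self.content_head(z), self.attr_head(z)

def dec_gan_losses(decomposer, generator, classifier, z, attr_labels):
    """Compose the three objectives named in the abstract (illustrative only)."""
    z_content, z_attr = decomposer(z)
    z_recombined = torch.cat([z_content, z_attr], dim=1)

    # 1) Classifier guidance: the attribute feature should carry the attribute
    #    recognized by the pre-trained classifier.
    attr_logits = classifier(generator(z_recombined))
    loss_attr = F.cross_entropy(attr_logits, attr_labels)

    # 2) Data recovery: recombining the decomposed features should reproduce
    #    the image generated from the original latent code.
    x_orig = generator(z)
    x_rec = generator(z_recombined)
    loss_recovery = F.l1_loss(x_rec, x_orig)

    # 3) Feature consistency: decomposing the recombined code should return
    #    the same content/attribute features (a stand-in for the paper's term).
    z_c2, z_a2 = decomposer(z_recombined)
    loss_consistency = F.mse_loss(z_c2, z_content) + F.mse_loss(z_a2, z_attr)

    return loss_attr + loss_recovery + loss_consistency
```

Under this reading, only the decomposer is trained; the generator and classifier stay frozen, which matches the abstract's claim that the method works with existing/pre-trained generative models.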
