Paper Title

Exocentric to Egocentric Image Generation via Parallel Generative Adversarial Network

Authors

Gaowen Liu, Hao Tang, Hugo Latapie, Yan Yan

Abstract

Cross-view image generation has recently been proposed to generate images of one view from another, dramatically different view. In this paper, we investigate exocentric (third-person) to egocentric (first-person) view image generation. This is a challenging task since the egocentric view is sometimes remarkably different from the exocentric view. Thus, transforming appearances across the two views is a non-trivial task. To this end, we propose a novel Parallel Generative Adversarial Network (P-GAN) with a novel cross-cycle loss to learn the shared information for generating egocentric images from the exocentric view. We also incorporate a novel contextual feature loss in the learning procedure to capture the contextual information in images. Extensive experiments on the Exo-Ego datasets show that our model outperforms the state-of-the-art approaches.
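The abstract names two components, a cross-cycle loss between parallel generators and a contextual feature loss, but gives no formulas. The sketch below is an illustration only, not the authors' P-GAN implementation: the tiny generator architecture, the exact form of both loss terms, and all names (TinyGenerator, cross_cycle_loss, contextual_feature_loss, feat_net) are assumptions made for demonstration.

```python
# Illustrative sketch only (PyTorch). The paper's actual P-GAN architecture and
# loss definitions are not given in the abstract; every module and formula
# below is a hypothetical stand-in, not the authors' implementation.
import torch
import torch.nn as nn


class TinyGenerator(nn.Module):
    """Hypothetical encoder-decoder generator mapping a 3x64x64 image to 3x64x64."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)


def cross_cycle_loss(G_exo2ego, G_ego2exo, exo, ego):
    """One plausible reading of a cross-cycle term: translate each view to the
    other and back, then penalize the reconstruction error (L1)."""
    rec_exo = G_ego2exo(G_exo2ego(exo))
    rec_ego = G_exo2ego(G_ego2exo(ego))
    return (rec_exo - exo).abs().mean() + (rec_ego - ego).abs().mean()


def contextual_feature_loss(feat_net, fake, real):
    """Hypothetical contextual feature term: L1 distance between feature maps
    of generated and real images under a fixed feature extractor."""
    with torch.no_grad():
        real_feat = feat_net(real)
    return (feat_net(fake) - real_feat).abs().mean()


if __name__ == "__main__":
    G_exo2ego, G_ego2exo = TinyGenerator(), TinyGenerator()
    # Stand-in feature extractor; a pretrained backbone could play this role.
    feat_net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
    exo = torch.randn(2, 3, 64, 64)
    ego = torch.randn(2, 3, 64, 64)
    loss = cross_cycle_loss(G_exo2ego, G_ego2exo, exo, ego) \
         + contextual_feature_loss(feat_net, G_exo2ego(exo), ego)
    print(float(loss))
```

In a full training loop these terms would be combined with the adversarial losses from the discriminators, which are omitted from this sketch.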
