Paper Title
Cross-View Panorama Image Synthesis
Paper Authors
Paper Abstract
In this paper, we tackle the problem of synthesizing a ground-view panorama image conditioned on a top-view aerial image, which is challenging due to the large gap between the two image domains with different viewpoints. Instead of learning the cross-view mapping in a single feedforward pass, we propose a novel adversarial feedback GAN framework named PanoGAN with two key components: an adversarial feedback module and a dual-branch discrimination strategy. First, the aerial image is fed into the generator to produce a target panorama image and its associated segmentation map, which facilitates model training with layout semantics. Second, the feature responses of the discriminator, encoded by our adversarial feedback module, are fed back to the generator to refine the intermediate representations, so that generation quality is progressively improved through an iterative generation process. Third, to pursue high fidelity and semantic consistency in the generated panorama image, we propose a pixel-segmentation alignment mechanism under the dual-branch discrimination strategy to facilitate cooperation between the generator and the discriminator. Extensive experimental results on two challenging cross-view image datasets show that PanoGAN enables high-quality panorama image generation with more convincing details than state-of-the-art approaches. The source code and trained models are available at \url{https://github.com/sswuai/PanoGAN}.
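To make the described pipeline concrete, below is a minimal sketch of the iterative adversarial-feedback loop: a generator that produces both a panorama and a segmentation map, and a dual-branch discriminator whose intermediate feature map is fed back to refine the next generation round. All class names, layer configurations, tensor shapes, and the number of feedback iterations are illustrative assumptions, not the authors' actual architecture; the official implementation is available at \url{https://github.com/sswuai/PanoGAN}.

```python
# Hypothetical sketch of the adversarial-feedback generation loop (not the
# official PanoGAN implementation). Layer choices and shapes are assumptions.
import torch
import torch.nn as nn


class Generator(nn.Module):
    """Toy generator: maps an aerial image plus a feedback feature map
    to a panorama image and a segmentation map (dual outputs)."""
    def __init__(self, in_ch=3, feat_ch=64):
        super().__init__()
        self.encode = nn.Conv2d(in_ch + feat_ch, feat_ch, 3, padding=1)
        self.to_pano = nn.Conv2d(feat_ch, 3, 3, padding=1)  # panorama branch
        self.to_seg = nn.Conv2d(feat_ch, 3, 3, padding=1)   # segmentation branch

    def forward(self, aerial, feedback):
        h = torch.relu(self.encode(torch.cat([aerial, feedback], dim=1)))
        return torch.tanh(self.to_pano(h)), torch.tanh(self.to_seg(h))


class Discriminator(nn.Module):
    """Toy dual-branch discriminator: scores the panorama and the segmentation
    map separately, and exposes a shared feature map as the feedback signal."""
    def __init__(self, feat_ch=64):
        super().__init__()
        self.shared = nn.Conv2d(6, feat_ch, 3, padding=1)  # pano + seg concatenated
        self.pano_head = nn.Conv2d(feat_ch, 1, 3, padding=1)
        self.seg_head = nn.Conv2d(feat_ch, 1, 3, padding=1)

    def forward(self, pano, seg):
        feat = torch.relu(self.shared(torch.cat([pano, seg], dim=1)))
        return self.pano_head(feat), self.seg_head(feat), feat


# Iterative generation: the discriminator's feature response is fed back to the
# generator over a few rounds (three rounds here is an arbitrary assumption).
G, D = Generator(), Discriminator()
aerial = torch.randn(1, 3, 64, 128)     # dummy top-view aerial image
feedback = torch.zeros(1, 64, 64, 128)  # zero feedback on the first pass
for step in range(3):
    pano, seg = G(aerial, feedback)
    score_pano, score_seg, feedback = D(pano, seg)
print(pano.shape, seg.shape)            # refined outputs after the feedback rounds
```

The point this sketch illustrates is that the discriminator's shared feature map plays a second role as a feedback signal: the generator's output at round t+1 is conditioned on how the discriminator responded to the output at round t, which is the iterative refinement idea stated in the abstract.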