Paper Title
Self-supervised Matting-specific Portrait Enhancement and Generation
Paper Authors
Paper Abstract
We address the ill-posed alpha matting problem from a completely different perspective. Given an input portrait image, instead of estimating the corresponding alpha matte, we focus on the other end: subtly enhancing the input so that its alpha matte can be easily estimated by any existing matting model. This is accomplished by exploring the latent space of GAN models. It has been demonstrated that interpretable directions can be found in the latent space and that they correspond to semantic image transformations. We further explore this property for alpha matting. In particular, we invert an input portrait into a StyleGAN latent code and aim to discover whether the latent space contains an enhanced version of the portrait that is more compatible with a reference matting model. We optimize multi-scale latent vectors in the latent space under four tailored losses, ensuring matting specificity while keeping the modifications to the portrait subtle. We demonstrate that the proposed method can refine real portrait images for arbitrary matting models, boosting the performance of automatic alpha matting by a large margin. In addition, we leverage the generative property of StyleGAN and propose to generate enhanced portrait data that can be treated as pseudo ground truth (GT). This addresses the problem of expensive alpha matte annotation and further improves the matting performance of existing models. Code is available at~\url{https://github.com/cnnlstm/StyleGAN_Matting}.
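To make the optimization idea in the abstract concrete, the following is a minimal Python sketch of optimizing a multi-scale (W+) latent code of a pretrained StyleGAN so that a frozen reference matting model produces an easier-to-estimate matte while the portrait changes only subtly. It assumes a generator exposing a synthesis(w_plus) call; invert_to_wplus, matting_specific_loss, and the loss weights are hypothetical placeholders, not the authors' actual interfaces or the four losses of the paper.

import torch
import torch.nn.functional as F

def enhance_for_matting(portrait, generator, matting_model, steps=200, lr=0.01):
    # GAN inversion: initial multi-scale (W+) latent code for the input portrait.
    # invert_to_wplus is a hypothetical helper standing in for any inversion method.
    w_plus = invert_to_wplus(generator, portrait).clone().requires_grad_(True)
    optimizer = torch.optim.Adam([w_plus], lr=lr)

    for _ in range(steps):
        enhanced = generator.synthesis(w_plus)   # candidate enhanced portrait
        alpha = matting_model(enhanced)          # matte from the frozen reference model

        # Two illustrative objectives standing in for the four tailored losses:
        # stay close to the input (subtle modification) and make the matte easier
        # for the reference model to estimate (matting specificity).
        recon = F.l1_loss(enhanced, portrait)
        matting = matting_specific_loss(alpha)   # hypothetical matting-quality term

        loss = recon + 0.1 * matting             # weights are placeholders
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return generator.synthesis(w_plus).detach()

The enhanced portrait returned by such a loop would then be fed to the matting model in place of the original input; the paper's actual losses and optimization schedule are described in the full text and released code.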