Paper Title
Towards An End-to-End Framework for Flow-Guided Video Inpainting
Paper Authors
Abstract
Optical flow, which captures motion information across frames, is exploited in recent video inpainting methods by propagating pixels along its trajectories. However, the hand-crafted flow-based processes in these methods are applied separately to form the whole inpainting pipeline. Thus, these methods are less efficient and rely heavily on the intermediate results from earlier stages. In this paper, we propose an End-to-End framework for Flow-Guided Video Inpainting (E$^2$FGVI) built from three elaborately designed trainable modules: a flow completion module, a feature propagation module, and a content hallucination module. The three modules correspond to the three stages of previous flow-based methods but can be jointly optimized, leading to a more efficient and effective inpainting process. Experimental results demonstrate that the proposed method outperforms state-of-the-art methods both qualitatively and quantitatively and shows promising efficiency. The code is available at https://github.com/MCG-NKU/E2FGVI.
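The key structural claim of the abstract is that the three formerly separate, hand-crafted stages become trainable modules composed into a single differentiable pipeline. The skeleton below is a minimal sketch of that composition only; every name and function body here is an illustrative assumption (plain-Python placeholders), not the authors' actual API or network design.

```python
# Hypothetical sketch of the three-stage flow-guided inpainting pipeline.
# Each stage is a placeholder standing in for a trainable module; in the
# real system these would be neural networks optimized jointly end to end.

def complete_flow(flows, masks):
    # Stage 1 (flow completion): fill in optical flow inside the masked
    # (corrupted) regions. Placeholder: return the flows unchanged.
    return flows

def propagate_features(frames, completed_flows, masks):
    # Stage 2 (feature propagation): propagate information from neighboring
    # frames along the completed flow trajectories. Placeholder: identity.
    return frames

def hallucinate_content(features):
    # Stage 3 (content hallucination): synthesize content that propagation
    # alone could not recover. Placeholder: identity.
    return features

def inpaint_video(frames, flows, masks):
    """Single forward pass through all three stages. Because the stages are
    ordinary functions composed in one pipeline (rather than separate
    hand-crafted processes), a later stage no longer depends on fixed
    intermediate results: all three could be optimized jointly."""
    completed = complete_flow(flows, masks)
    features = propagate_features(frames, completed, masks)
    return hallucinate_content(features)
```

With identity placeholders, the pipeline simply passes the input frames through, which makes the staging explicit without committing to any particular module internals.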