Paper Title

End-to-end Full Projector Compensation

Paper Authors

Bingyao Huang, Tao Sun, Haibin Ling

Paper Abstract

Full projector compensation aims to modify a projector input image to compensate for both geometric and photometric disturbance of the projection surface. Traditional methods usually solve the two parts separately and may suffer from suboptimal solutions. In this paper, we propose the first end-to-end differentiable solution, named CompenNeSt++, to solve the two problems jointly. First, we propose a novel geometric correction subnet, named WarpingNet, which is designed with a cascaded coarse-to-fine structure to learn the sampling grid directly from sampling images. Second, we propose a novel photometric compensation subnet, named CompenNeSt, which is designed with a siamese architecture to capture the photometric interactions between the projection surface and the projected images, and to use such information to compensate the geometrically corrected images. By concatenating WarpingNet with CompenNeSt, CompenNeSt++ accomplishes full projector compensation and is end-to-end trainable. Third, to improve practicability, we propose a novel synthetic data-based pre-training strategy to significantly reduce the number of training images and training time. Moreover, we construct the first setup-independent full compensation benchmark to facilitate future studies. In thorough experiments, our method shows clear advantages over prior art with promising compensation quality and meanwhile being practically convenient.
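
The abstract describes a two-subnet composition: WarpingNet performs geometric correction by learning a sampling grid, CompenNeSt performs photometric compensation with a siamese architecture, and the two are concatenated so the whole pipeline is end-to-end differentiable. Below is a minimal, hypothetical PyTorch sketch of that composition; all internals (the single learnable grid, layer sizes, the feature subtraction, and the data flow) are simplified placeholders of ours, not the authors' actual CompenNeSt++ architecture or training pipeline.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WarpingNet(nn.Module):
    """Geometric correction: predicts a sampling grid applied via
    grid_sample. Simplified to one learnable grid; the paper uses a
    cascaded coarse-to-fine design."""
    def __init__(self, h=256, w=256):
        super().__init__()
        # Start from the identity grid; grid_sample expects
        # coordinates in [-1, 1] with shape (N, H, W, 2).
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                                torch.linspace(-1, 1, w), indexing="ij")
        self.grid = nn.Parameter(torch.stack((xs, ys), dim=-1).unsqueeze(0))

    def forward(self, x):
        grid = self.grid.expand(x.size(0), -1, -1, -1)
        return F.grid_sample(x, grid, align_corners=True)

class CompenNeSt(nn.Module):
    """Photometric compensation: a shared (siamese) encoder processes
    both the geometrically corrected image and the captured surface
    image, and a decoder predicts the compensated image."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid())

    def forward(self, x, surf):
        # Subtracting surface features from image features is one
        # simple way to model their photometric interaction.
        feat = self.encoder(x) - self.encoder(surf)
        return self.decoder(feat)

class CompenNeStPP(nn.Module):
    """Full compensation = geometric warp followed by photometric
    compensation; the composition remains end-to-end trainable."""
    def __init__(self):
        super().__init__()
        self.warping_net = WarpingNet()
        self.compen_nest = CompenNeSt()

    def forward(self, cam_img, cam_surf):
        return self.compen_nest(self.warping_net(cam_img),
                                self.warping_net(cam_surf))
```

Because WarpingNet's grid_sample and CompenNeSt's convolutions are all differentiable, a single image loss backpropagates through both subnets, which is the property that lets the paper train geometric and photometric compensation jointly rather than solving them separately.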
