Paper Title
Robust Scene Inference under Noise-Blur Dual Corruptions
Paper Authors
Paper Abstract
Scene inference under low-light is a challenging problem due to severe noise in the captured images. One way to reduce noise is to use a longer exposure during capture. However, in the presence of motion (scene or camera motion), longer exposures lead to motion blur, resulting in loss of image information. This creates a trade-off between two kinds of image degradation: motion blur (due to long exposure) vs. noise (due to short exposure), also referred to as a dual image corruption pair in this paper. With the rise of cameras capable of capturing multiple exposures of the same scene simultaneously, it is possible to overcome this trade-off. Our key observation is that although the amount and nature of degradation vary across these different image captures, the semantic content remains the same in all of them. To this end, we propose a method to leverage these multi-exposure captures for robust inference under low light and motion. Our method builds on a feature consistency loss to encourage similar results from the individual captures, and uses the ensemble of their final predictions for robust visual recognition. We demonstrate the effectiveness of our approach on simulated images as well as real captures with multiple exposures, and across the tasks of object detection and image classification.
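The abstract describes two components: a feature consistency loss across the multi-exposure captures during training, and an ensemble of their per-capture predictions at test time. Below is a minimal PyTorch sketch of how such an objective and ensemble might look for the classification case; the function names (`consistency_and_task_loss`, `ensemble_predict`), the use of mean pairwise L2 feature distance, the weighting `lambda_consistency`, and the assumption that `model` returns a `(features, logits)` pair are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F


def consistency_and_task_loss(model, captures, labels, lambda_consistency=1.0):
    """Training objective sketch: task loss per exposure plus a feature
    consistency term that pulls features of different exposures of the
    same scene toward each other.

    captures: list of image batches, one per exposure (same scenes, same order).
    labels:   ground-truth class labels shared by all exposures.
    """
    feats, logits = zip(*(model(x) for x in captures))

    # Standard task loss, averaged over the individual exposure captures.
    task_loss = sum(F.cross_entropy(l, labels) for l in logits) / len(logits)

    # Feature consistency: mean pairwise L2 distance between the features
    # of different exposures (one possible instantiation of the idea).
    cons_loss, n_pairs = 0.0, 0
    for i in range(len(feats)):
        for j in range(i + 1, len(feats)):
            cons_loss = cons_loss + F.mse_loss(feats[i], feats[j])
            n_pairs += 1
    cons_loss = cons_loss / max(n_pairs, 1)

    return task_loss + lambda_consistency * cons_loss


def ensemble_predict(model, captures):
    """Test-time sketch: average the per-exposure softmax predictions."""
    with torch.no_grad():
        probs = [F.softmax(model(x)[1], dim=-1) for x in captures]
    return torch.stack(probs).mean(dim=0)
```

For detection, the same idea would apply per-capture to the detector's features and box predictions, with the ensemble step replaced by a suitable fusion of detections (e.g., score averaging or NMS over the combined boxes); the paper's exact formulation may differ.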