Paper Title
Learning to See Through with Events
Authors
Abstract
Although synthetic aperture imaging (SAI) can achieve the seeing-through effect by blurring out off-focus foreground occlusions while recovering in-focus occluded scenes from multi-view images, its performance often degrades under dense occlusions and extreme lighting conditions. To address this problem, this paper presents an Event-based SAI (E-SAI) method that relies on the asynchronous events, with extremely low latency and high dynamic range, acquired by an event camera. Specifically, the collected events are first refocused by a Refocus-Net module to align in-focus events while scattering out off-focus ones. A hybrid network composed of spiking neural networks (SNNs) and convolutional neural networks (CNNs) is then proposed to encode the spatio-temporal information from the refocused events and reconstruct a visual image of the occluded targets. Extensive experiments demonstrate that our proposed E-SAI method achieves remarkable performance in dealing with very dense occlusions and extreme lighting conditions, producing high-quality images from pure events. Code and datasets are available at https://dvs-whu.cn/projects/esai/.
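The refocusing step admits a simple geometric reading: for a camera translating at constant speed, an event triggered by a point on the chosen focal plane can be warped to a reference view by shifting its pixel coordinate in proportion to its timestamp. Below is a minimal NumPy sketch of that intuition; the function name, parameters, and the constant-velocity, x-axis-translation assumptions are illustrative only, not the paper's learned Refocus-Net module.

```python
import numpy as np

def refocus_events(events, v, f, d, t_ref=0.0):
    """Shift event x-coordinates so that events from depth d align.

    Hypothetical illustration, not the paper's Refocus-Net.
    events : (N, 4) array of (x, y, t, p) rows.
    v      : camera speed along the x-axis (m/s), assumed constant.
    f      : focal length in pixels.
    d      : depth of the target (occluded) plane (m).
    t_ref  : timestamp of the reference view (s).
    """
    x, y, t, p = events.T
    # A scene point at depth d moves f * v / d pixels per second in the
    # image; undoing that motion warps each event to the reference view.
    x_ref = x - f * v * (t - t_ref) / d
    return np.stack([x_ref, y, t, p], axis=1)
```

Under this warp, events from occluders at other depths receive the wrong shift and remain scattered, which is what allows a subsequent network to suppress them while reconstructing the in-focus target.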
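To make the hybrid SNN-plus-CNN design concrete, here is a minimal PyTorch sketch, assuming the refocused events are binned into a (B, T, C, H, W) tensor with two polarity channels per time bin. All class names and layer sizes are hypothetical, as the abstract does not specify the architecture; note also that training a real SNN requires a surrogate gradient for the non-differentiable spike threshold, which this sketch omits.

```python
import torch
import torch.nn as nn

class LIFEncoder(nn.Module):
    """Spiking encoder: one conv layer driving leaky integrate-and-fire
    neurons, iterated over the time bins of the event tensor."""

    def __init__(self, in_ch=2, out_ch=32, beta=0.9, v_th=1.0):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.beta, self.v_th = beta, v_th  # membrane decay and threshold

    def forward(self, x):
        # x: (B, T, C, H, W) refocused events, C=2 polarity channels.
        B, T, C, H, W = x.shape
        mem = torch.zeros(B, self.conv.out_channels, H, W, device=x.device)
        rate = torch.zeros_like(mem)
        for t in range(T):
            mem = self.beta * mem + self.conv(x[:, t])  # leaky integration
            spk = (mem >= self.v_th).float()            # fire
            mem = mem - spk * self.v_th                 # soft reset
            rate += spk
        return rate / T  # time-averaged spike-rate feature map

class HybridSNNCNN(nn.Module):
    """Hybrid network: SNN encoder for spatio-temporal event features,
    CNN decoder reconstructing a grayscale image of the occluded scene."""

    def __init__(self):
        super().__init__()
        self.encoder = LIFEncoder()
        self.decoder = nn.Sequential(
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Example: 30 time bins of 2-channel (polarity) event frames.
net = HybridSNNCNN()
img = net(torch.rand(1, 30, 2, 128, 128))  # -> (1, 1, 128, 128)
```

The split of labor mirrors the abstract's description: the spiking stage consumes the fine-grained temporal structure of the refocused events, and the convolutional stage turns the resulting rate maps into an intensity image.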