Paper title
v2e: From Video Frames to Realistic DVS Events
Paper authors
Paper abstract
To help meet the increasing need for dynamic vision sensor (DVS) event camera data, this paper proposes the v2e toolbox, which generates realistic synthetic DVS events from intensity frames. It also clarifies incorrect claims about DVS motion blur and latency characteristics in recent literature. Unlike other toolboxes, v2e includes pixel-level Gaussian event threshold mismatch, finite intensity-dependent bandwidth, and intensity-dependent noise. Realistic DVS events are useful for training networks for uncontrolled lighting conditions. The use of v2e synthetic events is demonstrated in two experiments. The first experiment is object recognition with the N-Caltech 101 dataset. Results show that pretraining on various v2e lighting conditions improves the generalization of a ResNet model when transferring to real DVS data. The second experiment shows that for night driving, a car detector trained on v2e events achieves an average accuracy improvement of 40% compared to a YOLOv3 detector trained on intensity frames.
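To illustrate the core idea behind event synthesis from intensity frames, here is a minimal, hypothetical sketch of a DVS pixel model: each pixel memorizes a log intensity and emits an ON or OFF event when the change crosses a per-pixel threshold drawn from a Gaussian (modeling the threshold mismatch mentioned in the abstract). This is a toy illustration, not the v2e implementation; it omits the bandwidth filtering, noise model, and multi-event emission for large brightness steps that v2e includes. All function and parameter names are invented for this example.

```python
import numpy as np

def generate_dvs_events(frames, theta_on=0.2, theta_off=0.2, sigma=0.03, seed=0):
    """Toy DVS event generator.

    frames: list of 2D intensity arrays (consecutive video frames).
    Returns a list of (frame_index, y, x, polarity) events, where
    polarity is +1 (ON, brightening) or -1 (OFF, darkening).
    """
    rng = np.random.default_rng(seed)
    h, w = frames[0].shape
    # Pixel-level Gaussian threshold mismatch, clipped to stay positive.
    th_on = np.clip(rng.normal(theta_on, sigma, (h, w)), 0.01, None)
    th_off = np.clip(rng.normal(theta_off, sigma, (h, w)), 0.01, None)

    def log_intensity(f):
        # Small offset avoids log(0) at dark pixels.
        return np.log(f.astype(np.float64) + 1e-3)

    mem = log_intensity(frames[0])  # memorized log intensity per pixel
    events = []
    for t, f in enumerate(frames[1:], start=1):
        cur = log_intensity(f)
        diff = cur - mem
        on = diff >= th_on        # brightening past the ON threshold
        off = diff <= -th_off     # darkening past the OFF threshold
        for y, x in zip(*np.nonzero(on)):
            events.append((t, int(y), int(x), +1))
        for y, x in zip(*np.nonzero(off)):
            events.append((t, int(y), int(x), -1))
        # Reset the memorized value only for pixels that fired.
        mem = np.where(on | off, cur, mem)
    return events
```

Doubling the intensity of every pixel between two frames produces a log change of about 0.69, well above the nominal 0.2 threshold, so each pixel fires one ON event in this simplified model.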