Paper Title
DeepMix: Mobility-aware, Lightweight, and Hybrid 3D Object Detection for Headsets
Paper Authors
Paper Abstract
Mobile headsets should be capable of understanding 3D physical environments to offer a truly immersive experience for augmented/mixed reality (AR/MR). However, their small form factor and limited computation resources make it extremely challenging to execute 3D vision algorithms in real time, as these algorithms are known to be more compute-intensive than their 2D counterparts. In this paper, we propose DeepMix, a mobility-aware, lightweight, and hybrid 3D object detection framework for improving the user experience of AR/MR on mobile headsets. Motivated by our analysis and evaluation of state-of-the-art 3D object detection models, DeepMix intelligently combines edge-assisted 2D object detection with novel, on-device 3D bounding box estimation that leverages depth data captured by headsets. This leads to low end-to-end latency and significantly boosts detection accuracy in mobile scenarios. A unique feature of DeepMix is that it fully exploits the mobility of headsets to fine-tune detection results and further improve detection accuracy. To the best of our knowledge, DeepMix is the first 3D object detection framework that achieves 30 FPS (an end-to-end latency much lower than the stringent 100 ms requirement of interactive AR/MR). We implement a prototype of DeepMix on Microsoft HoloLens and evaluate its performance via both extensive controlled experiments and a user study with 30+ participants. Compared to baselines that use existing 3D object detection models, DeepMix not only improves detection accuracy by 9.1--37.3% but also reduces end-to-end latency by 2.68--9.15x.
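The core idea in the abstract — lifting an edge-computed 2D detection into a 3D bounding box using the headset's depth map — can be illustrated with a minimal sketch. This is not DeepMix's actual estimator (the paper's on-device pipeline and its filtering steps are not reproduced here); the function name, pinhole-intrinsics interface, and axis-aligned box output are all assumptions for illustration only.

```python
import numpy as np

def lift_2d_box_to_3d(depth, box2d, fx, fy, cx, cy, depth_scale=1.0):
    """Back-project depth pixels inside a 2D detection box into 3D camera
    coordinates and return an axis-aligned 3D bounding box (illustrative
    sketch only, not the paper's method).

    depth       : HxW depth map, e.g. from a headset depth sensor (raw units)
    box2d       : (x_min, y_min, x_max, y_max) pixel box from a 2D detector
    fx, fy, cx, cy : pinhole camera intrinsics
    depth_scale : multiplier converting raw depth units to meters
    """
    x0, y0, x1, y1 = box2d
    patch = depth[y0:y1, x0:x1].astype(np.float64) * depth_scale
    ys, xs = np.nonzero(patch > 0)          # ignore invalid (zero) depth pixels
    z = patch[ys, xs]
    u = xs + x0                             # back to full-image pixel coords
    v = ys + y0
    X = (u - cx) * z / fx                   # pinhole back-projection
    Y = (v - cy) * z / fy
    pts = np.stack([X, Y, z], axis=1)
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    return lo, hi                           # opposite corners of the 3D box
```

Because this step is simple geometry rather than a neural network, it can run on-device at low latency — which is the trade-off the abstract highlights: the heavy 2D detector runs on the edge, while the 3D lifting stays local.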