Paper Title
LightDepth: A Resource Efficient Depth Estimation Approach for Dealing with Ground Truth Sparsity via Curriculum Learning
Paper Authors
Paper Abstract
Advances in neural networks enable tackling complex computer vision tasks, such as depth estimation of outdoor scenes, at unprecedented accuracy, and promising research on depth estimation continues. However, current efforts are computationally resource-intensive and do not consider the resource constraints of autonomous devices such as robots and drones. In this work, we present a fast and battery-efficient approach to depth estimation. Our approach devises model-agnostic curriculum-based learning for depth estimation. Our experiments show that our model performs on par with state-of-the-art models in accuracy, while outperforming them in response time by 71%. All code is available online at https://github.com/fatemehkarimii/LightDepth.