Paper Title
MaskingDepth: Masked Consistency Regularization for Semi-supervised Monocular Depth Estimation
Paper Authors
Paper Abstract
We propose MaskingDepth, a novel semi-supervised learning framework for monocular depth estimation that mitigates the reliance on large quantities of ground-truth depth. MaskingDepth is designed to enforce consistency between strongly-augmented unlabeled data and the pseudo-labels derived from weakly-augmented unlabeled data, which enables learning depth without supervision. In this framework, a novel data augmentation is proposed to take advantage of a naive masking strategy as an augmentation, while avoiding its scale-ambiguity problem between depths from the weakly- and strongly-augmented branches and the risk of missing small-scale instances. To retain only high-confidence depth predictions from the weakly-augmented branch as pseudo-labels, we also present an uncertainty estimation technique, which is used to define a robust consistency regularization. Experiments on the KITTI and NYU-Depth-v2 datasets demonstrate the effectiveness of each component, its robustness to the use of fewer depth-annotated images, and superior performance compared to other state-of-the-art semi-supervised methods for monocular depth estimation. Furthermore, we show that our method can be easily extended to a domain adaptation task. Our code is available at https://github.com/KU-CVLAB/MaskingDepth.
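To make the core idea concrete, below is a minimal Python (PyTorch-style) sketch of an uncertainty-masked consistency loss between a strongly-augmented ("student") prediction and a pseudo-label from the weakly-augmented ("teacher") branch. This is an illustrative assumption based only on the abstract, not the paper's actual implementation; the function name, the threshold parameter tau, and the student/teacher naming are hypothetical.

import torch
import torch.nn.functional as F

def masked_consistency_loss(student_depth, teacher_depth, teacher_uncertainty, tau=0.2):
    # Keep only pixels whose estimated uncertainty from the weakly-augmented
    # branch falls below the (hypothetical) threshold tau.
    with torch.no_grad():
        confident = (teacher_uncertainty < tau).float()   # per-pixel confidence mask
        pseudo_label = teacher_depth.detach()             # no gradients through the pseudo-label
    per_pixel = F.l1_loss(student_depth, pseudo_label, reduction="none")
    # Average the L1 discrepancy over confident pixels only.
    return (per_pixel * confident).sum() / confident.sum().clamp(min=1.0)

# Example usage with hypothetical model outputs:
#   strong_pred, _        = model(strong_aug(image))
#   weak_pred, weak_unc   = model(weak_aug(image))
#   loss = masked_consistency_loss(strong_pred, weak_pred, weak_unc)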