Paper Title
Learning from Pixel-Level Noisy Label: A New Perspective for Light Field Saliency Detection
Paper Authors
Paper Abstract
Saliency detection with light field images is becoming attractive given the abundant cues available; however, this comes at the expense of large-scale pixel-level annotated data, which is expensive to generate. In this paper, we propose to learn light field saliency from pixel-level noisy labels obtained from unsupervised, hand-crafted feature-based saliency methods. Given this goal, a natural question is: can we efficiently incorporate the relationships among light field cues while identifying clean labels in a unified framework? We address this question by formulating the learning as a joint optimization of an intra-light-field feature fusion stream and an inter-scene correlation stream to generate the predictions. Specifically, we first introduce a pixel-forgetting-guided fusion module to mutually enhance the light field features and exploit pixel consistency across iterations to identify noisy pixels. Next, we introduce a cross-scene noise penalty loss to better reflect the latent structure of the training data and make the learning invariant to noise. Extensive experiments on multiple benchmark datasets demonstrate the superiority of our framework, showing that it learns saliency predictions comparable to state-of-the-art fully supervised light field saliency methods. Our code is available at https://github.com/OLobbCode/NoiseLF.
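The abstract's idea of exploiting pixel consistency across iterations to identify noisy pixels can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the flip-count threshold, the `pred_history` tensor layout, and the 0.5 binarization cutoff are all illustrative assumptions.

```python
import numpy as np

def count_pixel_flips(pred_history):
    """Count how often each pixel's binarized prediction flips across
    successive training iterations. Frequently flipping ("forgotten")
    pixels are treated as likely carriers of noisy labels.

    pred_history: array of shape (T, H, W) holding predicted saliency
    probabilities saved at T successive iterations (illustrative layout).
    """
    binary = (pred_history > 0.5).astype(np.int8)        # binarize each snapshot
    flips = np.abs(np.diff(binary, axis=0)).sum(axis=0)  # per-pixel flip count
    return flips

def noisy_pixel_mask(pred_history, max_flips=2):
    """Flag pixels whose prediction flipped more than `max_flips` times
    (the threshold is an assumed hyperparameter, not from the paper)."""
    return count_pixel_flips(pred_history) > max_flips

# Toy example: 4 snapshots of a 2x2 saliency map. Only the bottom-left
# pixel oscillates (0.4, 0.6, 0.3, 0.7), so only it gets flagged.
history = np.array([
    [[0.9, 0.1], [0.4, 0.8]],
    [[0.8, 0.2], [0.6, 0.7]],
    [[0.9, 0.1], [0.3, 0.9]],
    [[0.7, 0.2], [0.7, 0.8]],
])
mask = noisy_pixel_mask(history, max_flips=2)
# mask → [[False, False], [True, False]]
```

In a training loop, such a mask could down-weight or exclude the flagged pixels when computing the loss against the noisy pseudo-labels, while stable pixels are kept as (pseudo-)clean supervision.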