Paper Title

Learning Noise-Aware Encoder-Decoder from Noisy Labels by Alternating Back-Propagation for Saliency Detection

Authors

Jing Zhang, Jianwen Xie, Nick Barnes

Abstract

In this paper, we propose a noise-aware encoder-decoder framework to disentangle a clean saliency predictor from noisy training examples, where the noisy labels are generated by unsupervised handcrafted feature-based methods. The proposed model consists of two sub-models parameterized by neural networks: (1) a saliency predictor that maps input images to clean saliency maps, and (2) a noise generator, which is a latent variable model that produces noises from Gaussian latent vectors. The whole model that represents noisy labels is a sum of the two sub-models. The goal of training the model is to estimate the parameters of both sub-models, and simultaneously infer the corresponding latent vector of each noisy label. We propose to train the model by using an alternating back-propagation (ABP) algorithm, which alternates the following two steps: (1) learning back-propagation for estimating the parameters of two sub-models by gradient ascent, and (2) inferential back-propagation for inferring the latent vectors of training noisy examples by Langevin Dynamics. To prevent the network from converging to trivial solutions, we utilize an edge-aware smoothness loss to regularize hidden saliency maps to have similar structures as their corresponding images. Experimental results on several benchmark datasets indicate the effectiveness of the proposed model.
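The two-step ABP alternation described above can be sketched on a linear toy model. This is a hypothetical stand-in, not the authors' implementation: in the paper both sub-models are neural networks, and the edge-aware smoothness regularizer is omitted here. The noisy label is modeled as y ≈ Wx + Vz; the latent vector z is inferred by Langevin dynamics, and the parameters W, V are updated by gradient ascent on the log-likelihood.

```python
import numpy as np

# Toy linear sketch of alternating back-propagation (ABP): the noisy label is
# modeled as y ≈ W x (saliency predictor) + V z (noise generator), with z a
# Gaussian latent vector. All dimensions and hyperparameters are illustrative.
rng = np.random.default_rng(0)
d_img, d_out, d_z = 12, 6, 3
sigma, lr, step = 1.0, 0.01, 0.1      # noise scale, learning rate, Langevin step

x = rng.standard_normal(d_img)        # "image" features for one example
y = rng.standard_normal(d_out)        # its noisy pseudo-label

W = 0.1 * rng.standard_normal((d_out, d_img))  # saliency-predictor parameters
V = 0.1 * rng.standard_normal((d_out, d_z))    # noise-generator parameters
z = rng.standard_normal(d_z)                   # latent vector for this example

def residual(W, V, z):
    # Difference between the noisy label and the summed model's prediction.
    return y - W @ x - V @ z

init_err = float(np.sum(residual(W, V, z) ** 2))

for _ in range(200):
    # Step 2: inferential back-propagation — Langevin dynamics on z.
    # Gradient of the log posterior: V^T r / sigma^2 - z (Gaussian prior on z).
    for _ in range(5):
        g = V.T @ residual(W, V, z) / sigma**2 - z
        z = z + 0.5 * step**2 * g + step * rng.standard_normal(d_z)
    # Step 1: learning back-propagation — gradient ascent on the
    # log-likelihood of y, w.r.t. both sub-models' parameters.
    r = residual(W, V, z)
    W += lr * np.outer(r, x)
    V += lr * np.outer(r, z)

final_err = float(np.sum(residual(W, V, z) ** 2))
```

This sketch only illustrates the alternation between inferring latent vectors and updating the two sub-models; the full method additionally regularizes the hidden saliency maps with the edge-aware smoothness loss so they stay structurally aligned with the input images.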
