Paper Title
Source-Relaxed Domain Adaptation for Image Segmentation
Paper Authors
Paper Abstract
Domain adaptation (DA) has drawn high interest for its capacity to adapt a model trained on labeled source data to perform well on unlabeled or weakly labeled target data from a different domain. Most common DA techniques require concurrent access to the input images of both the source and target domains. However, in practice, it is common that the source images are not available in the adaptation phase. This is a very frequent DA scenario in medical imaging, for instance, when the source and target images come from different clinical sites. We propose a novel formulation for adapting segmentation networks, which relaxes such a constraint. Our formulation is based on minimizing a label-free entropy loss defined over target-domain data, which we further guide with a domain-invariant prior on the segmentation regions. Many priors can be used, derived from anatomical information. Here, a class-ratio prior is learned via an auxiliary network and integrated in the form of a Kullback-Leibler (KL) divergence in our overall loss function. We show the effectiveness of our prior-aware entropy minimization in adapting spine segmentation across different MRI modalities. Our method yields comparable results to several state-of-the-art adaptation techniques, even though it has access to less information, the source images being absent in the adaptation phase. Our straightforward adaptation strategy uses only one network, contrary to popular adversarial techniques, which cannot perform without the presence of the source images. Our framework can be readily used with various priors and segmentation problems.
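The abstract describes the adaptation objective as a label-free entropy term over target-domain predictions plus a KL divergence matching the predicted class ratio to a learned class-ratio prior. Below is a minimal PyTorch sketch of such a loss; the function name, the `lambda_kl` weight, the KL direction, and the way the class ratio is estimated (mean softmax probability over pixels) are illustrative assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def prior_aware_entropy_loss(logits, class_ratio_prior, lambda_kl=1.0, eps=1e-8):
    """Label-free adaptation loss sketch: pixel-wise entropy on target
    predictions plus a KL divergence between a class-ratio prior and the
    predicted class ratio.

    logits:            (B, C, H, W) raw segmentation-network outputs on target images
    class_ratio_prior: (C,) prior proportion of each class (sums to 1),
                       e.g. produced by an auxiliary network (assumption)
    """
    probs = F.softmax(logits, dim=1)                        # (B, C, H, W)

    # Entropy term: encourages confident (low-entropy) pixel-wise predictions.
    entropy = -(probs * torch.log(probs + eps)).sum(dim=1)  # (B, H, W)
    entropy_loss = entropy.mean()

    # Predicted class ratio: average probability mass per class over all pixels.
    pred_ratio = probs.mean(dim=(0, 2, 3))                  # (C,)

    # KL(prior || predicted ratio): keeps predicted segment proportions close
    # to the domain-invariant class-ratio prior.
    kl = (class_ratio_prior *
          torch.log(class_ratio_prior / (pred_ratio + eps) + eps)).sum()

    return entropy_loss + lambda_kl * kl
```

In this sketch, adaptation would consist of fine-tuning the segmentation network on unlabeled target images by minimizing this loss alone, which is why no source images are needed in the adaptation phase.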