Paper Title
A Novel Unified Conditional Score-based Generative Framework for Multi-modal Medical Image Completion
Paper Authors
Paper Abstract
Multi-modal medical image completion has been extensively applied to alleviate the missing-modality issue in a wealth of multi-modal diagnostic tasks. However, for most existing synthesis methods, the inference of missing modalities can collapse into a deterministic mapping from the available ones, ignoring the uncertainties inherent in the cross-modal relationships. Here, we propose the Unified Multi-Modal Conditional Score-based Generative Model (UMM-CSGM) to take advantage of the Score-based Generative Model (SGM) in modeling and stochastically sampling a target probability distribution, and further extend SGM to cross-modal conditional synthesis for various missing-modality configurations in a unified framework. Specifically, UMM-CSGM employs a novel multi-in multi-out Conditional Score Network (mm-CSN) to learn a comprehensive set of cross-modal conditional distributions via conditional diffusion and reverse generation in the complete modality space. In this way, the generation process can be accurately conditioned on all available information, and a single network can fit all possible configurations of missing modalities. Experiments on the BraTS19 dataset show that UMM-CSGM can more reliably synthesize the heterogeneous enhancement and irregular areas in tumor-induced lesions under any missing-modality configuration.
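To make the core idea more concrete (a single multi-in multi-out score network that conditions the reverse diffusion on all available modalities plus an availability mask), the following is a minimal, hypothetical PyTorch sketch. The names (CondScoreNet, complete_missing, N_MODALITIES), the VP-SDE noise schedule, the crude time conditioning, and the Euler-Maruyama sampler are all illustrative assumptions, not details of the authors' implementation.

```python
# Illustrative sketch (not the paper's code): conditional score-based completion
# of missing modalities, conditioned on the available ones via a mask.
import torch
import torch.nn as nn

N_MODALITIES = 4  # e.g., T1, T1ce, T2, FLAIR slices (assumption for this sketch)

class CondScoreNet(nn.Module):
    """Predicts the score of the noised missing modalities, given the clean
    available modalities and a binary availability mask (1 = observed)."""
    def __init__(self, ch=32):
        super().__init__()
        # Input channels: noised modalities + clean conditions + mask + time map.
        self.net = nn.Sequential(
            nn.Conv2d(3 * N_MODALITIES + 1, ch, 3, padding=1), nn.SiLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.SiLU(),
            nn.Conv2d(ch, N_MODALITIES, 3, padding=1),
        )

    def forward(self, x_t, cond, mask, t):
        # Broadcast the diffusion time as an extra channel (crude time conditioning).
        t_map = t.view(-1, 1, 1, 1).expand(-1, 1, *x_t.shape[2:])
        return self.net(torch.cat([x_t, cond, mask, t_map], dim=1))

@torch.no_grad()
def complete_missing(model, available, mask, steps=100, beta_min=0.1, beta_max=20.0):
    """Euler-Maruyama sampling of the reverse VP-SDE.
    available: (B, 4, H, W) with missing channels zeroed; mask: 1 where observed."""
    x = torch.randn_like(available)          # start all channels from pure noise
    ts = torch.linspace(1.0, 1e-3, steps)
    dt = -(ts[0] - ts[1])                    # negative step: integrate backward in time
    for t in ts:
        beta = beta_min + t * (beta_max - beta_min)
        score = model(x, available, mask, t.expand(available.shape[0]))
        drift = -0.5 * beta * x - beta * score            # reverse-time drift
        x = x + drift * dt + (beta * (-dt)).sqrt() * torch.randn_like(x)
        x = mask * available + (1 - mask) * x             # keep observed modalities fixed
    return x
```

Because the availability mask is part of the network input, one trained network can in principle serve every configuration of missing modalities, which is the property the abstract emphasizes; the per-step replacement of observed channels is an extra simplification of this sketch rather than a mechanism described in the paper.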