Paper Title

Toward Unpaired Multi-modal Medical Image Segmentation via Learning Structured Semantic Consistency

Paper Authors

Jie Yang, Ye Zhu, Chaoqun Wang, Zhen Li, Ruimao Zhang

Paper Abstract

Integrating multi-modal data to promote medical image analysis has recently gained great attention. This paper presents a novel scheme to learn the mutual benefits of different modalities and achieve better segmentation results for unpaired multi-modal medical images. Our approach tackles two critical issues of this task from a practical perspective: (1) how to effectively learn the semantic consistencies of various modalities (e.g., CT and MRI), and (2) how to leverage the above consistencies to regularize network learning while preserving its simplicity. To address (1), we leverage a carefully designed External Attention Module (EAM) to align the semantic class representations of different modalities and their correlations. To solve (2), the proposed EAM is designed as an external plug-and-play module, which can be discarded once the model is optimized. We have demonstrated the effectiveness of the proposed method on two medical image segmentation scenarios: (1) cardiac structure segmentation, and (2) abdominal multi-organ segmentation. Extensive results show that the proposed method outperforms its counterparts by a wide margin.
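To make the consistency idea concrete, the sketch below illustrates one plausible way to align class representations and their correlations across modalities: pool per-class feature prototypes from each modality, then penalize both prototype differences and differences between the class-correlation matrices. This is a hypothetical NumPy sketch, not the paper's actual EAM; all function names and the specific loss form are illustrative assumptions.

```python
import numpy as np

def class_prototypes(features, labels, num_classes):
    """Mean feature vector per semantic class (masked average pooling).

    features: (N, D) pixel/voxel features; labels: (N,) integer class ids.
    """
    protos = np.zeros((num_classes, features.shape[1]))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = features[mask].mean(axis=0)
    return protos

def correlation_matrix(protos):
    """Cosine-similarity matrix between class prototypes."""
    norms = np.linalg.norm(protos, axis=1, keepdims=True) + 1e-8
    p = protos / norms
    return p @ p.T

def consistency_loss(feat_ct, lab_ct, feat_mr, lab_mr, num_classes):
    """Align class representations and their correlations across modalities.

    Combines (a) an L2 penalty between per-class prototypes of the two
    modalities and (b) an L2 penalty between their class-correlation matrices.
    """
    p_ct = class_prototypes(feat_ct, lab_ct, num_classes)
    p_mr = class_prototypes(feat_mr, lab_mr, num_classes)
    proto_loss = np.mean((p_ct - p_mr) ** 2)
    corr_loss = np.mean((correlation_matrix(p_ct) - correlation_matrix(p_mr)) ** 2)
    return proto_loss + corr_loss
```

Because the images are unpaired, only these class-level statistics (not pixel-wise correspondences) are compared, which is what makes such a regularizer applicable across unaligned CT and MRI scans.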
