Paper Title

MouseGAN++: Unsupervised Disentanglement and Contrastive Representation for Multiple MRI Modalities Synthesis and Structural Segmentation of Mouse Brain

Authors

Ziqi Yu, Xiaoyang Han, Shengjie Zhang, Jianfeng Feng, Tingying Peng, Xiao-Yong Zhang

Abstract

Segmenting the fine structure of the mouse brain on magnetic resonance (MR) images is critical for delineating morphological regions, analyzing brain function, and understanding their relationships. Compared to a single MRI modality, multimodal MRI data provide complementary tissue features that can be exploited by deep learning models, resulting in better segmentation results. However, multimodal mouse brain MRI data are often lacking, making automatic segmentation of mouse brain fine structure a very challenging task. To address this issue, it is necessary to fuse multimodal MRI data to produce distinct contrasts in different brain structures. Hence, we propose a novel disentangled and contrastive GAN-based framework, named MouseGAN++, to synthesize multiple MR modalities from a single one in a structure-preserving manner, thus improving the segmentation performance by imputing missing modalities and fusing multi-modality information. Our results demonstrate that the translation performance of our method outperforms state-of-the-art methods. Using the subsequently learned modality-invariant information as well as the modality-translated images, MouseGAN++ can segment fine brain structures with averaged Dice coefficients of 90.0% (T2w) and 87.9% (T1w), respectively, achieving around +10% performance improvement compared to state-of-the-art algorithms. Our results demonstrate that MouseGAN++, as a simultaneous image synthesis and segmentation method, can be used to fuse cross-modality information in an unpaired manner and yield more robust performance in the absence of multimodal data. We release our method as a mouse brain structural segmentation tool for free academic usage at https://github.com/yu02019.
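The reported 90.0% and 87.9% figures are Dice similarity coefficients, which measure the overlap between a predicted segmentation mask and a reference mask. A minimal sketch of the metric itself (not the authors' implementation, which operates on multi-label brain structures) is:

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice similarity coefficient between two binary masks:
    2 * |pred ∩ target| / (|pred| + |target|)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect overlap
    return 2.0 * intersection / total

# Toy 2x3 masks: 2 overlapping voxels, 3 foreground voxels each
pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_coefficient(pred, target), 3))  # → 0.667
```

For a multi-structure atlas, the per-structure Dice scores are typically averaged to yield a single summary number like those quoted in the abstract.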
