Paper Title


3D Segmentation Guided Style-based Generative Adversarial Networks for PET Synthesis

Authors

Yang Zhou, Zhiwen Yang, Hui Zhang, Eric I-Chao Chang, Yubo Fan, Yan Xu

Abstract


Potential radioactive hazards in full-dose positron emission tomography (PET) imaging remain a concern, whereas the quality of low-dose images is undesirable for clinical use. It is therefore of great interest to translate low-dose PET images into full-dose ones. Previous studies based on deep learning methods usually extract hierarchical features directly for reconstruction. We observe that the features differ in importance and should be weighted differently so that fine-grained information can be captured by the neural network. Furthermore, in some applications the synthesis quality on certain regions of interest is critical. Here we propose a novel segmentation guided style-based generative adversarial network (SGSGAN) for PET synthesis. (1) We put forward a style-based generator employing style modulation, which explicitly controls the hierarchical features in the translation process, to generate images with more realistic textures. (2) We adopt a task-driven strategy that couples a segmentation task with a generative adversarial network (GAN) framework to improve the translation performance. Extensive experiments show the superiority of our overall framework in PET synthesis, especially on regions of interest.
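The style modulation mentioned in contribution (1) is not detailed in the abstract; the idea of channel-wise modulation of convolution weights by a style code, followed by demodulation, can be sketched as below. This is a minimal NumPy illustration in the spirit of StyleGAN2-style modulated convolution, not the paper's actual implementation; the function name `style_modulate` and the shapes are assumptions for illustration.

```python
import numpy as np

def style_modulate(weights, style, eps=1e-8):
    """Scale convolution weights channel-wise by a style vector, then
    demodulate so each output filter has roughly unit L2 norm
    (StyleGAN2-style; a sketch, not the SGSGAN implementation).

    weights: (out_ch, in_ch, k, k) convolution kernel
    style:   (in_ch,) per-input-channel scales predicted from a style code
    """
    # Modulate: scale each input channel of the kernel by the style.
    w = weights * style[None, :, None, None]
    # Demodulate: normalize each output filter to unit L2 norm,
    # which keeps activation statistics stable across layers.
    demod = 1.0 / np.sqrt(np.sum(w ** 2, axis=(1, 2, 3)) + eps)
    return w * demod[:, None, None, None]

# Example: a 4-filter 3x3 kernel over 3 input channels.
rng = np.random.default_rng(0)
kernel = rng.standard_normal((4, 3, 3, 3))
style = rng.random(3) + 0.5          # strictly positive channel scales
modulated = style_modulate(kernel, style)
```

In this formulation the style vector reweights the contribution of each input feature channel before convolution, which is one way the hierarchy of features can be "weighted differently" as the abstract argues.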
