Title


Multi-layer Clustering-based Residual Sparsifying Transform for Low-dose CT Image Reconstruction

Authors

Xikai Yang, Zhishen Huang, Yong Long, Saiprasad Ravishankar

Abstract


The recently proposed sparsifying transform models incur low computational cost and have been applied to medical imaging. Meanwhile, deep models with nested network structure reveal great potential for learning features in different layers. In this study, we propose a network-structured sparsifying transform learning approach for X-ray computed tomography (CT), which we refer to as multi-layer clustering-based residual sparsifying transform (MCST) learning. The proposed MCST scheme learns multiple different unitary transforms in each layer by dividing each layer's input into several classes. We apply the MCST model to low-dose CT (LDCT) reconstruction by deploying the learned MCST model as the regularizer in penalized weighted least squares (PWLS) reconstruction. We conducted LDCT reconstruction experiments on XCAT phantom data and Mayo Clinic data, training the MCST model with 2 (or 3) layers and 5 clusters in each layer. The transforms learned within the same layer show rich features, while additional information is extracted from the representation residuals. Our simulation results demonstrate that PWLS-MCST achieves better image reconstruction quality than the conventional FBP method and PWLS with an edge-preserving (EP) regularizer. It also outperforms recent advanced methods such as PWLS with a learned multi-layer residual sparsifying transform prior (MARS) and PWLS with a union of learned transforms (ULTRA), especially in displaying clear edges and preserving subtle details.
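To make the "multiple unitary transforms per layer, applied to clustered inputs" idea concrete, the following is a minimal sketch of one MCST-style layer. It is an illustration under stated assumptions, not the authors' exact algorithm: clustering is done with plain k-means (the paper learns cluster memberships jointly with the transforms), sparse coding uses hard thresholding, each cluster's unitary transform is updated via the closed-form orthogonal Procrustes solution, and the transform-domain residual `W X - Z` is taken as the next layer's input. The function name `learn_mcst_layer` and all parameter defaults are hypothetical.

```python
import numpy as np

def hard_threshold(A, tau):
    """Keep entries with magnitude above tau; zero out the rest."""
    return A * (np.abs(A) > tau)

def learn_mcst_layer(X, n_clusters=5, tau=0.1, n_iters=15, seed=0):
    """One layer of clustering-based residual sparsifying-transform
    learning (illustrative sketch).

    X : (d, N) array; each column is a vectorized patch, or a residual
        column produced by the previous layer.
    Returns (transforms, labels, residual); `residual` has the same
    shape as X and serves as the next layer's input.
    """
    rng = np.random.default_rng(seed)
    d, N = X.shape

    # Crude k-means clustering of the columns (assumption: stands in
    # for the joint cluster-assignment step of the actual method).
    centers = X[:, rng.choice(N, size=n_clusters, replace=False)].copy()
    for _ in range(10):
        dists = np.linalg.norm(X[:, None, :] - centers[:, :, None], axis=0)
        labels = dists.argmin(axis=0)
        for k in range(n_clusters):
            if np.any(labels == k):
                centers[:, k] = X[:, labels == k].mean(axis=1)

    # Per-cluster unitary transform learning.
    transforms = [np.eye(d) for _ in range(n_clusters)]
    residual = np.zeros_like(X)
    for k in range(n_clusters):
        Xk = X[:, labels == k]
        if Xk.shape[1] == 0:
            continue
        W = np.eye(d)
        for _ in range(n_iters):
            Z = hard_threshold(W @ Xk, tau)     # sparse coding step
            U, _, Vt = np.linalg.svd(Xk @ Z.T)  # orthogonal Procrustes
            W = Vt.T @ U.T                      # closed-form unitary update
        transforms[k] = W
        Z = hard_threshold(W @ Xk, tau)
        residual[:, labels == k] = W @ Xk - Z   # fed to the next layer
    return transforms, labels, residual
```

A 2- or 3-layer model as described in the abstract would call this function repeatedly, feeding each layer's `residual` output in as the next layer's `X`.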
