Paper Title
End-to-end Memory-Efficient Reconstruction for Cone Beam CT
Paper Authors
Paper Abstract
Cone Beam CT plays an important role in many medical fields nowadays, but the potential of this imaging modality is hampered by lower image quality compared to conventional CT. Much recent research has been directed towards reconstruction methods relying on deep learning. However, practical application of deep learning to CBCT reconstruction is complicated by several issues, such as the exceedingly high memory cost of deep learning methods for fully 3D data. In this work, we address these limitations and propose LIRE: a learned invertible primal-dual iterative scheme for Cone Beam CT reconstruction. The memory requirements of the network are substantially reduced while preserving its expressive power, enabling us to train on data with isotropic 2 mm voxel spacing, clinically relevant projection count, and detector panel resolution on current hardware with 24 GB VRAM. Two LIRE models, for a small and a large Field-of-View (FoV) setting, were trained and validated on a set of 260 + 22 thorax CT scans and tested using a set of 142 thorax CT scans plus an out-of-distribution dataset of 79 head \& neck CT scans. For both settings, our method surpasses the classical methods and the deep learning baselines on both test sets. On the thorax CT set, our method achieves a PSNR of 33.84 $\pm$ 2.28 for the small FoV setting and 35.14 $\pm$ 2.69 for the large FoV setting; the U-Net baseline achieves a PSNR of 33.08 $\pm$ 1.75 and 34.29 $\pm$ 2.71, respectively. On the head \& neck CT set, our method achieves a PSNR of 39.35 $\pm$ 1.75 for the small FoV setting and 41.21 $\pm$ 1.41 for the large FoV setting; the U-Net baseline achieves a PSNR of 33.08 $\pm$ 1.75 and 34.29 $\pm$ 2.71, respectively. Additionally, we demonstrate that LIRE can be fine-tuned to reconstruct high-resolution CBCT data with the same geometry but 1 mm voxel spacing and a higher detector panel resolution, where it also outperforms the U-Net baseline.
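The abstract attributes LIRE's reduced memory footprint to the invertibility of its iterative scheme: if each layer's input can be reconstructed exactly from its output, intermediate activations need not be cached for backpropagation and can be recomputed on the fly. The following is a minimal sketch of this general idea using an additive-coupling block; the sub-network `f`, the variable names, and the NumPy setting are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def f(x, w):
    # Hypothetical sub-network: a fixed nonlinear map stands in for a
    # learned CNN. Any function of x2 works; invertibility does not
    # require f itself to be invertible.
    return np.tanh(w * x)

def forward(x1, x2, w):
    # Additive coupling: y1 = x1 + f(x2), y2 = x2.
    y1 = x1 + f(x2, w)
    y2 = x2
    return y1, y2

def inverse(y1, y2, w):
    # Exact inversion: the inputs are recovered from the outputs alone,
    # so a training framework can discard forward activations and
    # rebuild them during the backward pass, trading compute for memory.
    x2 = y2
    x1 = y1 - f(x2, w)
    return x1, x2

# Round trip: inverse(forward(x)) == x up to floating-point precision.
x1 = np.array([1.0, -2.0, 0.5])
x2 = np.array([0.3, 0.7, -1.1])
y1, y2 = forward(x1, x2, w=0.9)
r1, r2 = inverse(y1, y2, w=0.9)
assert np.allclose(r1, x1) and np.allclose(r2, x2)
```

With such blocks, activation memory stays roughly constant in network depth, which is what makes training on full 3D volumes at clinically relevant resolutions feasible within a 24 GB VRAM budget.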