Paper Title


The Fully Convolutional Transformer for Medical Image Segmentation

Paper Authors

Athanasios Tragakis, Chaitanya Kaul, Roderick Murray-Smith, Dirk Husmeier

Abstract


We propose a novel transformer model, capable of segmenting medical images of varying modalities. Challenges posed by the fine-grained nature of medical image analysis mean that the adaptation of the transformer for their analysis is still at nascent stages. The overwhelming success of the UNet lies in its ability to appreciate the fine-grained nature of the segmentation task, an ability which existing transformer-based models do not currently possess. To address this shortcoming, we propose The Fully Convolutional Transformer (FCT), which builds on the proven ability of Convolutional Neural Networks to learn effective image representations, and combines them with the ability of Transformers to effectively capture long-term dependencies in their inputs. The FCT is the first fully convolutional Transformer model in the medical imaging literature. It processes its input in two stages: first, it learns to extract long-range semantic dependencies from the input image, and then learns to capture hierarchical global attributes from the features. FCT is compact, accurate and robust. Our results show that it outperforms all existing transformer architectures by large margins across multiple medical image segmentation datasets of varying data modalities without the need for any pre-training. FCT outperforms its immediate competitor on the ACDC dataset by 1.3%, on the Synapse dataset by 4.4%, on the Spleen dataset by 1.2% and on the ISIC 2017 dataset by 1.1% on the dice metric, with up to five times fewer parameters. Our code, environments and models will be available via GitHub.
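The two-stage processing the abstract describes (convolutional projections that capture local spatial context, followed by self-attention over all spatial positions for long-range dependencies) can be illustrated with a toy NumPy sketch. This is not the authors' FCT implementation; the depthwise kernels, single attention head, and absence of normalisation and residual connections are all simplifying assumptions made here for clarity.

```python
import numpy as np

def conv2d_depthwise(x, k):
    # x: (H, W, C) feature map; k: (kh, kw, C) per-channel kernels.
    # 'Same' padding, stride 1: each channel is convolved independently.
    H, W, C = x.shape
    kh, kw, _ = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw), (0, 0)))
    out = np.zeros_like(x)
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k, axis=(0, 1))
    return out

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

def conv_attention_block(x, k_q, k_k, k_v):
    # Stage 1: convolutional projections of queries/keys/values
    # inject local spatial structure into the tokens.
    H, W, C = x.shape
    q = conv2d_depthwise(x, k_q).reshape(H * W, C)
    k = conv2d_depthwise(x, k_k).reshape(H * W, C)
    v = conv2d_depthwise(x, k_v).reshape(H * W, C)
    # Stage 2: self-attention over all H*W positions captures
    # long-range dependencies across the whole feature map.
    attn = softmax(q @ k.T / np.sqrt(C))
    return (attn @ v).reshape(H, W, C)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 4))           # a small 8x8 map with 4 channels
kernels = [rng.standard_normal((3, 3, 4)) * 0.1 for _ in range(3)]
y = conv_attention_block(x, *kernels)
print(y.shape)  # (8, 8, 4): same spatial resolution, attention-mixed features
```

Each output position is a weighted mixture of value vectors from every spatial location, so information can flow across the entire map in one block, while the convolutional projections keep the fine-grained local detail that purely patch-based transformers tend to lose.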
