Title

Contrastive Pretraining for Echocardiography Segmentation with Limited Data

Authors

Mohamed Saeed, Rand Muhtaseb, Mohammad Yaqub

Abstract


Contrastive learning has proven useful in many applications where access to labelled data is limited. The lack of annotated data is particularly problematic in medical image segmentation, as it is difficult to have clinical experts manually annotate large volumes of data such as cardiac structures in ultrasound images of the heart. In this paper, we propose a self-supervised contrastive learning method to segment the left ventricle from echocardiography when limited annotated images exist. Furthermore, we study the effect of contrastive pretraining on two well-known segmentation networks, UNet and DeepLabV3. Our results show that contrastive pretraining helps improve the performance on left ventricle segmentation, particularly when annotated data is scarce. We show how to achieve results comparable to state-of-the-art fully supervised algorithms when we train our models in a self-supervised fashion followed by fine-tuning on just 5% of the data. We show that our solution outperforms what is currently published on a large public dataset (EchoNet-Dynamic), achieving a Dice score of 0.9252. We also compare the performance of our solution on another, smaller dataset (CAMUS) to demonstrate the generalizability of our proposed solution. The code is available at https://github.com/BioMedIA-MBZUAI/contrastive-echo.
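To make the contrastive pretraining idea concrete, below is a minimal NumPy sketch of a standard NT-Xent (normalized temperature-scaled cross-entropy) contrastive loss, as used in SimCLR-style self-supervised pretraining. This is a generic illustration of the technique the abstract names, not the paper's exact formulation; the function name, temperature value, and pairing convention (row i paired with row i+N, e.g. two augmented views of the same echo frame) are assumptions for the sketch.

```python
import numpy as np

def nt_xent_loss(z, temperature=0.5):
    """Generic NT-Xent contrastive loss (SimCLR-style sketch).

    z: array of shape (2N, d); rows i and i+N are assumed to be embeddings
    of two augmented views of the same image (a positive pair).
    """
    # L2-normalise embeddings so the dot product is cosine similarity
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    # exclude self-similarity from the softmax denominator
    np.fill_diagonal(sim, -np.inf)
    n = z.shape[0] // 2
    # index of each row's positive partner: i <-> i+n
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # cross-entropy of the positive pair against all other samples
    log_prob = sim[np.arange(2 * n), pos] - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()
```

After pretraining an encoder with a loss like this on unlabelled echo frames, the encoder weights would be used to initialise the segmentation network (UNet or DeepLabV3) before fine-tuning on the small annotated subset.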
