Paper Title

Robust and efficient computation of retinal fractal dimension through deep approximation

Paper Authors

Justin Engelmann, Ana Villaplana-Velasco, Amos Storkey, Miguel O. Bernabeu

Paper Abstract

A retinal trait, or phenotype, summarises a specific aspect of a retinal image in a single number. This can then be used for further analyses, e.g. with statistical methods. However, reducing an aspect of a complex image to a single, meaningful number is challenging. Thus, methods for calculating retinal traits tend to be complex, multi-step pipelines that can only be applied to high-quality images. This means that researchers often have to discard substantial portions of the available data. We hypothesise that such pipelines can be approximated with a single, simpler step that can be made robust to common quality issues. We propose Deep Approximation of Retinal Traits (DART), where a deep neural network is used to predict the output of an existing pipeline on high-quality images from synthetically degraded versions of these images. We demonstrate DART on retinal Fractal Dimension (FD) calculated by VAMPIRE, using retinal images from UK Biobank that previous work identified as high quality. Our method shows very high agreement with FD VAMPIRE on unseen test images (Pearson r = 0.9572). Even when those images are severely degraded, DART can still recover an FD estimate that shows good agreement with FD VAMPIRE obtained from the original images (Pearson r = 0.8817). This suggests that our method could enable researchers to discard fewer images in the future. Our method can compute FD for over 1,000 img/s using a single GPU. We consider these to be very encouraging initial results and hope to develop this approach into a useful tool for retinal analysis.
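The core idea in the abstract is to train a deep neural network to reproduce the FD value that the existing VAMPIRE pipeline computes on a high-quality image, while feeding the network a synthetically degraded version of that image. The sketch below is a minimal, hypothetical illustration of such a training setup in PyTorch; the backbone (resnet18), the degradations (Gaussian blur, brightness/contrast jitter), the MSE loss, and all hyperparameters are assumptions made for illustration, not the configuration reported by the authors.

```python
# Minimal sketch of the DART training idea: regress the FD produced by an
# existing pipeline (e.g. VAMPIRE) on clean images, but train on synthetically
# degraded versions of those images. All model/degradation choices below are
# illustrative assumptions, not the paper's published setup.
import torch
import torch.nn as nn
import torchvision.transforms as T
from torchvision.models import resnet18


def synthetic_degradation():
    """Illustrative degradations approximating common quality issues
    (blur, poor illumination/contrast); the paper's exact degradations may differ."""
    return T.Compose([
        T.GaussianBlur(kernel_size=9, sigma=(0.1, 4.0)),
        T.ColorJitter(brightness=0.4, contrast=0.4),
    ])


class DARTRegressor(nn.Module):
    """CNN backbone with a single regression output for the FD estimate."""

    def __init__(self):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Linear(backbone.fc.in_features, 1)
        self.net = backbone

    def forward(self, x):
        return self.net(x).squeeze(-1)


def train_step(model, optimiser, images, fd_targets, degrade):
    """One training step: degrade clean images, regress the pipeline's FD values."""
    model.train()
    degraded = degrade(images)          # synthetic quality issues applied on the fly
    preds = model(degraded)             # predicted FD for each degraded image
    loss = nn.functional.mse_loss(preds, fd_targets)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    return loss.item()


if __name__ == "__main__":
    model = DARTRegressor()
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
    degrade = synthetic_degradation()
    # Dummy batch standing in for UK Biobank fundus images and their VAMPIRE FD labels.
    images = torch.rand(8, 3, 224, 224)
    fd_targets = torch.rand(8) * 0.3 + 1.3   # hypothetical FD values in a plausible range
    print(train_step(model, optimiser, images, fd_targets, degrade))
```

Agreement with the original pipeline on held-out images can then be summarised with a Pearson correlation, e.g. scipy.stats.pearsonr(predicted_fd, vampire_fd), mirroring the Pearson r values quoted in the abstract for clean and degraded test images.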
