Paper Title
Towards Reliable and Explainable AI Model for Solid Pulmonary Nodule Diagnosis
Paper Authors
Paper Abstract
Lung cancer has the highest mortality rate among all cancers worldwide. Early detection is essential to the treatment of lung cancer. However, the detection and accurate diagnosis of pulmonary nodules depend heavily on the experience of radiologists and can impose a heavy workload on them. Computer-aided diagnosis (CAD) systems have been developed to assist radiologists in nodule detection and diagnosis, greatly easing their workload while improving diagnostic accuracy. Recent developments in deep learning have greatly improved the performance of CAD systems. However, the lack of model reliability and interpretability remains a major obstacle to their large-scale clinical application. In this work, we propose a multi-task explainable deep-learning model for pulmonary nodule diagnosis. Our neural model can not only predict lesion malignancy but also identify relevant manifestations. Furthermore, the location of each manifestation can be visualized for visual interpretability. Our proposed model achieved a test AUC of 0.992 on the LIDC public dataset and a test AUC of 0.923 on our in-house dataset. Moreover, our experimental results demonstrate that incorporating manifestation-identification tasks into the multi-task model also improves the accuracy of malignancy classification. This explainable multi-task model may provide a scheme for better interaction with radiologists in a clinical setting.
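To make the multi-task idea described in the abstract concrete, the following is a minimal sketch of a shared feature extractor feeding two prediction heads, one for malignancy and one for manifestations. All layer sizes, names, and the choice of a single dense layer as a stand-in backbone are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_backbone(x, w):
    # Stand-in for a CNN feature extractor: one dense layer + ReLU.
    return np.maximum(x @ w, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy dimensions (assumed): 64-dim nodule features, 32-dim shared
# representation, 1 malignancy score, 8 manifestation labels
# (e.g. spiculation, lobulation).
d_in, d_hid, n_manif = 64, 32, 8
w_shared = rng.normal(size=(d_in, d_hid)) * 0.1
w_malig = rng.normal(size=(d_hid, 1)) * 0.1
w_manif = rng.normal(size=(d_hid, n_manif)) * 0.1

x = rng.normal(size=(4, d_in))      # batch of 4 nodule feature vectors
h = shared_backbone(x, w_shared)    # shared representation for both tasks
p_malig = sigmoid(h @ w_malig)      # per-nodule malignancy probability
p_manif = sigmoid(h @ w_manif)      # per-manifestation probabilities

print(p_malig.shape, p_manif.shape)  # (4, 1) (4, 8)
```

Because both heads share the backbone, gradients from the manifestation task also shape the shared representation during training, which is one plausible reading of why the abstract reports improved malignancy classification from the auxiliary task.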