Paper Title
E Pluribus Unum Interpretable Convolutional Neural Networks
Paper Authors
Paper Abstract
The adoption of Convolutional Neural Network (CNN) models in high-stakes domains is hindered by their inability to meet society's demand for transparency in decision-making. A growing number of methodologies have emerged for developing CNN models that are interpretable by design. However, such models are not capable of providing interpretations in accordance with human perception while maintaining competent performance. In this paper, we tackle these challenges with a novel, general framework for instantiating inherently interpretable CNN models, named E Pluribus Unum Interpretable CNN (EPU-CNN). An EPU-CNN model consists of CNN sub-networks, each of which receives a different representation of an input image expressing a perceptual feature, such as color or texture. The output of an EPU-CNN model consists of the classification prediction and its interpretation, in terms of the relative contributions of perceptual features in different regions of the input image. EPU-CNN models have been extensively evaluated on various publicly available datasets, as well as a contributed benchmark dataset. Medical datasets are used to demonstrate the applicability of EPU-CNN for risk-sensitive decisions in medicine. The experimental results indicate that EPU-CNN models can achieve a comparable or better classification performance than other CNN architectures while providing human-perceivable interpretations.
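The abstract describes an additive design: one CNN sub-network per perceptual representation (e.g., color, texture), whose scalar contributions are combined into a single prediction, with the per-feature contributions serving as the interpretation. The sketch below is a minimal, hypothetical PyTorch illustration of that idea; the layer sizes, sub-network architecture, and class names are assumptions for illustration, not the authors' actual EPU-CNN implementation.

```python
import torch
import torch.nn as nn


class PerceptualSubNet(nn.Module):
    """Toy CNN mapping one perceptual representation to a scalar contribution.

    The real EPU-CNN sub-networks are unspecified here; this tiny
    conv -> pool -> linear stack is a stand-in for illustration.
    """

    def __init__(self, in_channels: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling to a (B, 8, 1, 1) summary
        )
        self.head = nn.Linear(8, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)  # (B, 8)
        return self.head(h)              # (B, 1) scalar contribution


class EPUCNNSketch(nn.Module):
    """'Out of many, one': additive combination of per-feature sub-networks."""

    def __init__(self, feature_channels):
        super().__init__()
        self.subnets = nn.ModuleList(PerceptualSubNet(c) for c in feature_channels)

    def forward(self, feature_maps):
        # One scalar contribution per perceptual feature; their sum drives
        # the binary prediction, and the contributions themselves are the
        # interpretation returned alongside it.
        contribs = torch.cat(
            [net(fmap) for net, fmap in zip(self.subnets, feature_maps)], dim=1
        )  # (B, num_features)
        logit = contribs.sum(dim=1)       # (B,)
        return torch.sigmoid(logit), contribs


# Example: a 3-channel "color" map and a 1-channel "texture" map per image.
model = EPUCNNSketch(feature_channels=[3, 1])
color = torch.randn(2, 3, 32, 32)
texture = torch.randn(2, 1, 32, 32)
pred, contribs = model([color, texture])  # prediction in (0, 1) + per-feature contributions
```

The additive structure is what makes the interpretation direct: each sub-network's output is a self-contained contribution, so no post-hoc attribution method is needed to explain the prediction.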