Paper Title
Deep Learning Reproducibility and Explainable AI (XAI)
Paper Authors
Paper Abstract
The nondeterminism of Deep Learning (DL) training algorithms and its influence on the explainability of neural network (NN) models are investigated in this work with the help of image classification examples. To discuss the issue, two convolutional neural networks (CNNs) have been trained and their results compared. The comparison serves to explore the feasibility of creating deterministic, robust DL models and deterministic explainable artificial intelligence (XAI) in practice. The successes and limitations of all efforts carried out here are described in detail. The source code of the attained deterministic models is listed in this work. Reproducibility is indexed as a development-phase component of the Model Governance Framework, proposed by the EU within its excellence-in-AI approach. Furthermore, reproducibility is a requirement for establishing causality in the interpretation of model results and for building trust toward the overwhelming expansion of AI system applications. Problems that have to be solved on the way to reproducibility, and ways to deal with some of them, are examined in this work.
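The abstract refers to training two CNNs deterministically and comparing their results. As illustration only, here is a minimal sketch of how such an experiment is commonly set up, assuming PyTorch (the paper's published source code may use a different framework and additional settings): all known random seeds are fixed, nondeterministic kernels are disabled, and the same model is trained twice so the resulting weights can be checked for bitwise equality.

```python
# Minimal sketch (assumption: PyTorch; not the paper's actual source code).
# Fixes the usual sources of training nondeterminism, then trains a toy CNN
# twice and checks whether the two runs produce bitwise-identical weights.
import os
import random

import numpy as np
import torch
import torch.nn as nn

def make_deterministic(seed: int = 0) -> None:
    """Fix all known sources of randomness for a single-device run."""
    os.environ["PYTHONHASHSEED"] = str(seed)
    # Required for deterministic cuBLAS kernels on CUDA >= 10.2.
    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)                   # seeds CPU and all CUDA devices
    torch.use_deterministic_algorithms(True)  # raise an error on nondeterministic ops
    torch.backends.cudnn.benchmark = False    # disable autotuned kernel selection

def train_once(seed: int = 0) -> nn.Module:
    """Train a toy CNN on fixed random data; returns the trained model."""
    make_deterministic(seed)
    model = nn.Sequential(
        nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
        nn.Flatten(), nn.Linear(8 * 28 * 28, 10),
    )
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()
    x = torch.randn(32, 1, 28, 28)            # stand-in for an image batch
    y = torch.randint(0, 10, (32,))
    for _ in range(5):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    return model

if __name__ == "__main__":
    m1, m2 = train_once(), train_once()
    identical = all(
        torch.equal(p1, p2)
        for p1, p2 in zip(m1.state_dict().values(), m2.state_dict().values())
    )
    print("bitwise identical weights:", identical)  # expected: True
```

If any remaining source of nondeterminism (e.g., an op without a deterministic implementation) is present, `torch.use_deterministic_algorithms(True)` makes the run fail loudly instead of silently diverging, which is the kind of limitation the abstract alludes to.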