Paper Title

Toward Scalable and Unified Example-based Explanation and Outlier Detection

Authors

Penny Chong, Ngai-Man Cheung, Yuval Elovici, Alexander Binder

Abstract

When neural networks are employed for high-stakes decision-making, it is desirable that they provide explanations for their predictions so that we can understand the features that contributed to the decision. At the same time, it is important to flag potential outliers for in-depth verification by domain experts. In this work, we propose to unify two differing aspects of explainability with outlier detection. We argue for a broader adoption of prototype-based student networks capable of providing an example-based explanation for their predictions while also identifying regions of similarity between the predicted sample and the examples. The examples are real prototypical cases sampled from the training set via our novel iterative prototype replacement algorithm. Furthermore, we propose to use the prototype similarity scores for identifying outliers. We compare the classification performance, explanation quality, and outlier detection of our proposed network against other baselines. We show that our prototype-based networks, extending beyond similarity kernels, deliver meaningful explanations and promising outlier detection results without compromising classification accuracy.
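
To make the outlier-flagging idea concrete, below is a minimal Python sketch of scoring a test sample against a set of prototypes in an embedding space and flagging it as an outlier when no prototype is sufficiently similar. The Gaussian kernel, the `flag_outlier` helper, and the threshold value are illustrative assumptions on my part; the paper's actual similarity function, trained student network, and iterative prototype replacement algorithm are not reproduced here.

```python
import numpy as np

def similarity_scores(z, prototypes):
    """Similarity between an embedded sample z and each prototype.

    Uses a Gaussian kernel over squared Euclidean distance, one common
    choice for prototype networks; the paper's exact kernel may differ.
    """
    d2 = np.sum((prototypes - z) ** 2, axis=1)  # squared distance to each prototype
    return np.exp(-d2)                           # in (0, 1]; higher = more similar

def flag_outlier(z, prototypes, threshold=0.1):
    """Flag a sample as an outlier when its best prototype similarity
    falls below a (hypothetical) threshold, i.e. it resembles no
    prototypical training case."""
    s = similarity_scores(z, prototypes)
    return bool(s.max() < threshold), s

# Toy usage: three prototypes in a 2-D embedding space.
prototypes = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
inlier = np.array([0.1, 0.0])     # close to the first prototype
outlier = np.array([10.0, 10.0])  # far from all prototypes

print(flag_outlier(inlier, prototypes))   # (False, high max score)
print(flag_outlier(outlier, prototypes))  # (True, near-zero scores)
```

The same scores double as the example-based explanation: the highest-scoring prototypes are the training cases the prediction is most similar to, and a low maximum score is precisely the signal used to route the sample to a domain expert.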
