Title

Model Agnostic Interpretability for Multiple Instance Learning

Authors

Joseph Early, Christine Evers, Sarvapali Ramchurn

Abstract

In Multiple Instance Learning (MIL), models are trained using bags of instances, where only a single label is provided for each bag. A bag label is often only determined by a handful of key instances within a bag, making it difficult to interpret what information a classifier is using to make decisions. In this work, we establish the key requirements for interpreting MIL models. We then go on to develop several model-agnostic approaches that meet these requirements. Our methods are compared against existing inherently interpretable MIL models on several datasets, and achieve an increase in interpretability accuracy of up to 30%. We also examine the ability of the methods to identify interactions between instances and scale to larger datasets, improving their applicability to real-world problems.
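The MIL setup described above can be illustrated with a toy example. The sketch below is purely illustrative and not the paper's dataset or method: it assumes the standard MIL assumption that a bag is positive iff it contains at least one "key" instance, with a made-up rule (feature 0 exceeding a threshold) playing the role of the key-instance condition.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_bag(n_instances, contains_key):
    """Build a toy bag of 2-D instance feature vectors.

    Hypothetical construction: ordinary instances are drawn uniformly
    from [-1, 1]; a key instance is planted by setting feature 0 to 5.0.
    """
    instances = rng.uniform(-1.0, 1.0, size=(n_instances, 2))
    if contains_key:
        instances[0, 0] = 5.0  # plant one key instance
    return instances

def bag_label(instances):
    """Standard MIL assumption: the bag is positive iff any single
    instance satisfies the key condition (here, feature 0 > 3)."""
    return int((instances[:, 0] > 3.0).any())

# Only the bag-level labels are observed; instance labels stay hidden,
# which is why interpreting which instances drove a prediction is hard.
pos_bag = make_bag(5, contains_key=True)
neg_bag = make_bag(5, contains_key=False)
print(bag_label(pos_bag), bag_label(neg_bag))  # → 1 0
```

Note that only one instance in the positive bag carries the signal; the other four are indistinguishable from the negative bag's instances, which is the interpretability challenge the paper addresses.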
