Paper Title
Learning to Estimate Shapley Values with Vision Transformers
Paper Authors
Paper Abstract
Transformers have become a default architecture in computer vision, but understanding what drives their predictions remains a challenging problem. Current explanation approaches rely on attention values or input gradients, but these provide a limited view of a model's dependencies. Shapley values offer a theoretically sound alternative, but their computational cost makes them impractical for large, high-dimensional models. In this work, we aim to make Shapley values practical for vision transformers (ViTs). To do so, we first leverage an attention masking approach to evaluate ViTs with partial information, and we then develop a procedure to generate Shapley value explanations via a separate, learned explainer model. Our experiments compare Shapley values to many baseline methods (e.g., attention rollout, GradCAM, LRP), and we find that our approach provides more accurate explanations than existing methods for ViTs.
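As a reminder of the quantity being estimated: given a feature set $N$ and a value function $v(S)$ describing the model's output when only the features in subset $S$ are available, the Shapley value of feature $i$ is defined as follows (this is the standard game-theoretic definition, not anything specific to this paper):

$$
\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,\bigl(|N|-|S|-1\bigr)!}{|N|!} \Bigl( v(S \cup \{i\}) - v(S) \Bigr)
$$

For a ViT the "features" are image patches, and the sum ranges over all $2^{|N|}$ subsets, which is what makes exact computation impractical and motivates the attention masking and learned explainer described above.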
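A minimal sketch of how a ViT can be evaluated with partial information via attention masking, assuming a standard ViT whose first token is [CLS]; the function and argument names here are illustrative, not the paper's implementation:

import torch

def masked_attention(q, k, v, patch_mask):
    """Single attention layer with excluded patches masked out (sketch).

    q, k, v: (batch, heads, tokens, head_dim); token 0 is [CLS].
    patch_mask: (batch, tokens - 1), 1.0 = keep patch, 0.0 = exclude.
    """
    scale = q.shape[-1] ** -0.5
    scores = (q @ k.transpose(-2, -1)) * scale          # (B, H, T, T)
    # The [CLS] token is always kept; excluded patches get -inf logits,
    # so every query assigns them zero attention weight.
    keep = torch.cat([torch.ones_like(patch_mask[:, :1]), patch_mask], dim=1)
    scores = scores.masked_fill(keep[:, None, None, :] == 0, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v

Applying a mask of this form in every attention layer lets the model produce a prediction v(S) for any patch subset S without retraining, which is the prerequisite for estimating the subset-based quantity above.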
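And a rough sketch of the amortized-explainer idea: a separate network predicts per-patch attributions in one forward pass, trained so that attributions summed over sampled patch subsets reconstruct the masked model's output. Weighted least-squares objectives of this form, with subsets drawn from the Shapley kernel distribution, are known to recover Shapley values; the concrete loss below, including its uniform subset sampling, is a simplified assumption rather than the paper's exact objective, and explainer / masked_model are hypothetical callables.

import torch

def shapley_regression_loss(explainer, masked_model, images, n_samples=16):
    """Loss for one training step of an amortized explainer (sketch).

    explainer(images) -> (B, P) per-patch attributions phi.
    masked_model(images, mask) -> (B,) output (e.g. a class logit)
    computed with only the unmasked patches visible.
    """
    phi = explainer(images)                               # (B, P)
    B, P = phi.shape
    v_empty = masked_model(images, torch.zeros(B, P))     # baseline v(empty set)
    loss = 0.0
    for _ in range(n_samples):
        # Subsets should be drawn from the Shapley kernel distribution;
        # uniform Bernoulli sampling here is a simplification.
        S = (torch.rand(B, P) < 0.5).float()
        v_S = masked_model(images, S)
        # Additive reconstruction: v(S) should match v(empty) + sum of phi over S.
        pred = v_empty + (phi * S).sum(dim=1)
        loss = loss + ((v_S - pred) ** 2).mean()
    return loss / n_samples

Once trained, the explainer produces attribution maps in a single forward pass, avoiding the per-image subset enumeration that makes exact Shapley values intractable.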