Paper Title

Exploring Local Explanations of Nonlinear Models Using Animated Linear Projections

Authors

Nicholas Spyrison, Dianne Cook, Przemyslaw Biecek

Abstract

The increased predictive power of machine learning models comes at the cost of increased complexity and loss of interpretability, particularly in comparison to parametric statistical models. This trade-off has led to the emergence of eXplainable AI (XAI), which provides methods, such as local explanations (LEs) and local variable attributions (LVAs), to shed light on how a model uses predictors to arrive at a prediction. These provide a point estimate of the linear variable importance in the vicinity of a single observation. However, LVAs tend not to effectively handle association between predictors. To understand how the interaction between predictors affects the variable importance estimate, we can convert LVAs into linear projections and use the radial tour. This is also useful for learning how a model has made a mistake, the effect of outliers, or the clustering of observations. The approach is illustrated with examples from categorical (penguin species, chocolate types) and quantitative (soccer/football salaries, house prices) response models. The methods are implemented in the R package cheem, available on CRAN.
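
For readers who want to try the method, below is a minimal R sketch for getting started with the package. The entry point run_app(), which launches the package's interactive Shiny application, follows the usage shown in the package README; the exact workflow may differ across versions, so consult the CRAN documentation and vignettes for the full preprocessing steps.

```r
# Minimal sketch: install cheem from CRAN and open its interactive app.
# run_app() launches a Shiny application for exploring local variable
# attributions as linear projections via animated radial tours.
install.packages("cheem")   # one-time install from CRAN
library(cheem)

# help(package = "cheem") lists the exported functions, including the
# preprocessing helpers for computing LVAs on your own model and data.
run_app()
```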
