Paper Title
Measuring algorithmic interpretability: A human-learning-based framework and the corresponding cognitive complexity score
Paper Authors
Paper Abstract
Algorithmic interpretability is necessary to build trust, ensure fairness, and track accountability. However, there is no existing formal measurement method for algorithmic interpretability. In this work, we build upon programming language theory and cognitive load theory to develop a framework for measuring algorithmic interpretability. The proposed measurement framework reflects the process of a human learning an algorithm. We show that the measurement framework and the resulting cognitive complexity score have the following desirable properties: universality, computability, uniqueness, and monotonicity. We illustrate the measurement framework through a toy example, describe the framework and its conceptual underpinnings, and demonstrate its benefits, in particular for managers considering tradeoffs when selecting algorithms.