Paper Title

Enhanced well-being assessment as basis for the practical implementation of ethical and rights-based normative principles for AI

Paper Authors

Marek Havrda, Bogdana Rakova

Paper Abstract

Artificial Intelligence (AI) has an increasing impact on all areas of people's livelihoods. A detailed look at existing interdisciplinary and transdisciplinary metrics frameworks could bring new insights and enable practitioners to navigate the challenge of understanding and assessing the impact of Autonomous and Intelligent Systems (A/IS). There has been emerging consensus on fundamental ethical and rights-based AI principles proposed by scholars, governments, civil rights organizations, and technology companies. In order to move from principles to real-world implementation, we adopt a lens motivated by regulatory impact assessments and the well-being movement in public policy. Similar to public policy interventions, outcomes of AI systems implementation may have far-reaching complex impacts. In public policy, indicators are only part of a broader toolbox, as metrics inherently lead to gaming and dissolution of incentives and objectives. Similarly, in the case of A/IS, there's a need for a larger toolbox that allows for the iterative assessment of identified impacts, inclusion of new impacts in the analysis, and identification of emerging trade-offs. In this paper, we propose the practical application of an enhanced well-being impact assessment framework for A/IS that could be employed to address ethical and rights-based normative principles in AI. This process could enable a human-centered algorithmically-supported approach to the understanding of the impacts of AI systems. Finally, we propose a new testing infrastructure which would allow for governments, civil rights organizations, and others, to engage in cooperating with A/IS developers towards implementation of enhanced well-being impact assessments.
