Paper Title

Normative Challenges of Risk Regulation of Artificial Intelligence and Automated Decision-Making

Paper Authors

Carsten Orwat, Jascha Bareis, Anja Folberth, Jutta Jahnel, Christian Wadephul

Abstract

Recent proposals aiming at regulating artificial intelligence (AI) and automated decision-making (ADM) suggest a particular form of risk regulation, i.e. a risk-based approach. The most salient example is the Artificial Intelligence Act (AIA) proposed by the European Commission. The article addresses challenges for adequate risk regulation that arise primarily from the specific type of risks involved, i.e. risks to the protection of fundamental rights and fundamental societal values. They result mainly from the normative ambiguity of the fundamental rights and societal values in interpreting, specifying or operationalising them for risk assessments. This is exemplified for (1) human dignity, (2) informational self-determination, data protection and privacy, (3) justice and fairness, and (4) the common good. Normative ambiguities require normative choices, which are distributed among different actors in the proposed AIA. Particularly critical normative choices are those of selecting normative conceptions for specifying risks, aggregating and quantifying risks including the use of metrics, balancing of value conflicts, setting levels of acceptable risks, and standardisation. To avoid a lack of democratic legitimacy and legal uncertainty, scientific and political debates are suggested.
