Paper Title

Towards a Policy-as-a-Service Framework to Enable Compliant, Trustworthy AI and HRI Systems in the Wild

Authors

Alexis Morris, Hallie Siegel, Jonathan Kelly

Abstract

Building trustworthy autonomous systems is challenging for many reasons beyond simply trying to engineer agents that 'always do the right thing.' There is a broader context that is often not considered within AI and HRI: that the problem of trustworthiness is inherently socio-technical and ultimately involves a broad set of complex human factors and multidimensional relationships that can arise between agents, humans, organizations, and even governments and legal institutions, each with their own understanding and definitions of trust. This complexity presents a significant barrier to the development of trustworthy AI and HRI systems---while systems developers may desire to have their systems 'always do the right thing,' they generally lack the practical tools and expertise in law, regulation, policy, and ethics to ensure this outcome. In this paper, we emphasize the "fuzzy" socio-technical aspects of trustworthiness and the need for their careful consideration during both design and deployment. We hope to contribute to the discussion of trustworthy engineering in AI and HRI by i) describing the policy landscape that must be considered when addressing trustworthy computing and the need for usable trust models, ii) highlighting an opportunity for trustworthy-by-design intervention within the systems engineering process, and iii) introducing the concept of a "policy-as-a-service" (PaaS) framework that can be readily applied by AI systems engineers to address the fuzzy problem of trust during development and (eventually) at runtime. We envision that the PaaS approach, which offloads the development of policy design parameters and the maintenance of policy standards to policy experts, will enable runtime trust capabilities for intelligent systems in the wild.
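The abstract describes the PaaS idea only at a conceptual level. Below is a minimal, hypothetical sketch of how an agent might query an externally maintained policy service at runtime before acting; all names (PolicyServiceClient, evaluate, Decision) and the example rule are illustrative assumptions, not an API defined in the paper.

```python
# Hypothetical sketch of a "policy-as-a-service" (PaaS) runtime check.
# The policy parameters are assumed to be authored and maintained by
# policy experts, not by the systems engineer integrating this client.
from dataclasses import dataclass


@dataclass
class Decision:
    permitted: bool
    rationale: str


class PolicyServiceClient:
    """Stands in for an external policy service (illustrative only)."""

    def __init__(self, rules=None):
        # Example policy parameter: a speed limit near humans, in m/s.
        self.rules = rules or {"max_speed_near_humans_mps": 0.5}

    def evaluate(self, action: str, context: dict) -> Decision:
        # Minimal example rule: restrict robot speed when humans are nearby.
        if action == "move" and context.get("humans_nearby"):
            limit = self.rules["max_speed_near_humans_mps"]
            if context.get("speed_mps", 0.0) > limit:
                return Decision(False, f"speed exceeds {limit} m/s near humans")
        return Decision(True, "no applicable policy violated")


# Example runtime query by an agent before executing an action.
client = PolicyServiceClient()
decision = client.evaluate("move", {"humans_nearby": True, "speed_mps": 1.2})
print(decision.permitted, decision.rationale)
```

In this sketch the engineer only integrates the client and acts on its decisions, while the rules themselves live with (and are updated by) the policy experts, which is the division of labor the abstract proposes.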
