Paper Title

Enabling Morally Sensitive Robotic Clarification Requests

Authors

Ryan Blake Jackson and Tom Williams

Abstract

The design of current natural language oriented robot architectures enables certain architectural components to circumvent moral reasoning capabilities. One example of this is reflexive generation of clarification requests as soon as referential ambiguity is detected in a human utterance. As shown in previous research, this can lead robots to (1) miscommunicate their moral dispositions and (2) weaken human perception or application of moral norms within their current context. We present a solution to these problems by performing moral reasoning on each potential disambiguation of an ambiguous human utterance and responding accordingly, rather than immediately and naively requesting clarification. We implement our solution in the DIARC robot architecture, which, to our knowledge, is the only current robot architecture with both moral reasoning and clarification request generation capabilities. We then evaluate our method with a human subjects experiment, the results of which indicate that our approach successfully ameliorates the two identified concerns.
