Paper Title

When to (or not to) trust intelligent machines: Insights from an evolutionary game theory analysis of trust in repeated games

Paper Authors

Han, The Anh, Perret, Cedric, Powers, Simon T.

Paper Abstract

The actions of intelligent agents, such as chatbots, recommender systems, and virtual assistants are typically not fully transparent to the user. Consequently, using such an agent involves the user exposing themselves to the risk that the agent may act in a way opposed to the user's goals. It is often argued that people use trust as a cognitive shortcut to reduce the complexity of such interactions. Here we formalise this by using the methods of evolutionary game theory to study the viability of trust-based strategies in repeated games. These are reciprocal strategies that cooperate as long as the other player is observed to be cooperating. Unlike classic reciprocal strategies, once mutual cooperation has been observed for a threshold number of rounds they stop checking their co-player's behaviour every round, and instead only check with some probability. By doing so, they reduce the opportunity cost of verifying whether the action of their co-player was actually cooperative. We demonstrate that these trust-based strategies can outcompete strategies that are always conditional, such as Tit-for-Tat, when the opportunity cost is non-negligible. We argue that this cost is likely to be greater when the interaction is between people and intelligent agents, because of the reduced transparency of the agent. Consequently, we expect people to use trust-based strategies more frequently in interactions with intelligent agents. Our results provide new, important insights into the design of mechanisms for facilitating interactions between humans and intelligent agents, where trust is an essential factor.
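To make the strategy described in the abstract concrete, below is a minimal Python sketch of a repeated Prisoner's Dilemma in which verifying the co-player's last action carries an opportunity cost. The payoff values, `CHECK_COST`, the trust threshold and the checking probability `p_check` are illustrative assumptions and are not taken from the paper, which analyses these strategies with evolutionary game theory rather than by direct simulation.

```python
import random

# Illustrative sketch (not the paper's model): a repeated Prisoner's Dilemma
# where verifying the co-player's last move costs CHECK_COST per check.
R, S, T, P = 3.0, 0.0, 5.0, 1.0   # reward, sucker, temptation, punishment
CHECK_COST = 0.5                   # assumed opportunity cost of one verification

def payoff(my_move, their_move):
    """My payoff for one round ('C' = cooperate, 'D' = defect)."""
    return {('C', 'C'): R, ('C', 'D'): S, ('D', 'C'): T, ('D', 'D'): P}[(my_move, their_move)]

class TitForTat:
    """Always conditional: checks the co-player every round and copies their last move."""
    def __init__(self):
        self.next_move = 'C'
    def move(self):
        return self.next_move
    def observe(self, their_move):
        self.next_move = their_move       # pays the checking cost every round
        return CHECK_COST

class TrustBased:
    """Like Tit-for-Tat, but after `threshold` consecutive cooperative rounds
    it only verifies the co-player with probability `p_check`."""
    def __init__(self, threshold=5, p_check=0.2):
        self.threshold, self.p_check = threshold, p_check
        self.coop_streak = 0
        self.next_move = 'C'
    def move(self):
        return self.next_move
    def observe(self, their_move):
        trusting = self.coop_streak >= self.threshold
        if trusting and random.random() >= self.p_check:
            return 0.0                    # trust: skip the costly check this round
        # otherwise pay the cost and react as Tit-for-Tat would
        self.next_move = their_move
        self.coop_streak = self.coop_streak + 1 if their_move == 'C' else 0
        return CHECK_COST

def play(a, b, rounds=200):
    """Average per-round payoff, net of checking costs, for both players."""
    score_a = score_b = 0.0
    for _ in range(rounds):
        ma, mb = a.move(), b.move()
        score_a += payoff(ma, mb) - a.observe(mb)
        score_b += payoff(mb, ma) - b.observe(ma)
    return score_a / rounds, score_b / rounds

if __name__ == "__main__":
    random.seed(1)
    print("TFT   vs TFT  :", play(TitForTat(), TitForTat()))
    print("Trust vs TFT  :", play(TrustBased(), TitForTat()))
    print("Trust vs Trust:", play(TrustBased(), TrustBased()))
```

With a non-negligible `CHECK_COST`, the trust-based player nets a higher per-round payoff than Tit-for-Tat against a cooperative co-player, at the price of slower detection if the co-player defects while being trusted, which is the risk-versus-cost trade-off the abstract describes.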
