Title

Conditional Imitation Learning for Multi-Agent Games

Authors

Andy Shih, Stefano Ermon, Dorsa Sadigh

Abstract

While advances in multi-agent learning have enabled the training of increasingly complex agents, most existing techniques produce a final policy that is not designed to adapt to a new partner's strategy. However, we would like our AI agents to adjust their strategy based on the strategies of those around them. In this work, we study the problem of conditional multi-agent imitation learning, where we have access to joint trajectory demonstrations at training time, and we must interact with and adapt to new partners at test time. This setting is challenging because we must infer a new partner's strategy and adapt our policy to that strategy, all without knowledge of the environment reward or dynamics. We formalize this problem of conditional multi-agent imitation learning, and propose a novel approach to address the difficulties of scalability and data scarcity. Our key insight is that variations across partners in multi-agent games are often highly structured, and can be represented via a low-rank subspace. Leveraging tools from tensor decomposition, our model learns a low-rank subspace over ego and partner agent strategies, then infers and adapts to a new partner strategy by interpolating in the subspace. We experiment with a mix of collaborative tasks, including bandits, particle, and Hanabi environments. Additionally, we test our conditional policies against real human partners in a user study on the Overcooked game. Our model adapts better to new partners compared to baselines, and robustly handles diverse settings ranging from discrete/continuous actions to static/online evaluation with AI/human partners.
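The key insight above can be illustrated with a toy sketch: if partner strategies lie in a low-rank subspace, a decomposition learned from training partners lets us infer a new partner by projecting its observed behavior onto that subspace. The following minimal NumPy example is an assumption-laden stand-in (truncated SVD on a strategy matrix, rather than the paper's actual tensor decomposition and architecture); all dimensions and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 20 training partners, each described by a vector of
# 30 (state, action) preference entries. We assume partner variation is
# exactly rank 3, mirroring the paper's low-rank structure insight.
rank = 3
W = rng.normal(size=(20, rank))   # per-partner coefficients (latent)
H = rng.normal(size=(rank, 30))   # shared strategy basis (latent)
M = W @ H                         # observed strategy matrix from demos

# Learn a low-rank subspace from the demonstrations via truncated SVD
# (a simple surrogate for the tensor decomposition used in the paper).
U, S, Vt = np.linalg.svd(M, full_matrices=False)
basis = Vt[:rank]                 # (rank, 30) learned strategy subspace

# Test time: infer a new partner by projecting its observed behavior onto
# the subspace with least squares, i.e. interpolating in the subspace.
w_new = rng.normal(size=rank)
behavior = w_new @ H              # new partner's true strategy vector
coeffs, *_ = np.linalg.lstsq(basis.T, behavior, rcond=None)
recon = coeffs @ basis            # inferred strategy for adaptation

print(np.allclose(recon, behavior, atol=1e-6))  # True: exact recovery
```

Because the synthetic strategies are exactly rank 3, projection recovers the new partner perfectly; with real demonstrations the subspace is only approximate, and the recovered coefficients serve as a compact conditioning signal for the ego policy.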
