Paper Title
Asynchronous Actor-Critic for Multi-Agent Reinforcement Learning
Paper Authors
Paper Abstract
Synchronizing decisions across multiple agents in realistic settings is problematic since it requires agents to wait for other agents to terminate and communicate about termination reliably. Ideally, agents should learn and execute asynchronously instead. Such asynchronous methods also allow temporally extended actions that can take different amounts of time based on the situation and action executed. Unfortunately, current policy gradient methods are not applicable in asynchronous settings, as they assume that agents synchronously reason about action selection at every time step. To allow asynchronous learning and decision-making, we formulate a set of asynchronous multi-agent actor-critic methods that allow agents to directly optimize asynchronous policies in three standard training paradigms: decentralized learning, centralized learning, and centralized training for decentralized execution. Empirical results (in simulation and hardware) in a variety of realistic domains demonstrate the superiority of our approaches in large multi-agent problems and validate the effectiveness of our algorithms for learning high-quality and asynchronous solutions.
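The abstract's core idea, updating each agent's actor and critic only when that agent's own temporally extended action terminates rather than at every synchronized time step, can be illustrated with a minimal sketch. The code below is not the paper's algorithm; it is a toy, NumPy-only illustration under assumed names and sizes (select_macro_action, async_update, random stand-in dynamics and rewards, a linear critic, and a softmax actor) of how an asynchronous actor-critic update might accumulate a k-step discounted return over a macro-action and apply the policy-gradient step only at termination.

import numpy as np

rng = np.random.default_rng(0)

# Toy sizes and hyperparameters (hypothetical, for illustration only).
N_AGENTS, N_OBS, N_ACTS = 2, 4, 3
GAMMA, LR = 0.95, 1e-2

# Per-agent softmax actor (theta) and linear critic (w).
theta = [rng.normal(scale=0.1, size=(N_OBS, N_ACTS)) for _ in range(N_AGENTS)]
w = [np.zeros(N_OBS) for _ in range(N_AGENTS)]

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def select_macro_action(i, obs):
    # Sample a macro-action and a random duration (1-3 steps) for agent i.
    probs = softmax(obs @ theta[i])
    action = rng.choice(N_ACTS, p=probs)
    duration = int(rng.integers(1, 4))
    return action, duration

def async_update(i, obs, action, cum_reward, k, next_obs):
    # Actor-critic step applied only when agent i's macro-action terminates:
    # a k-step discounted return replaces the usual one-step TD target.
    td_error = cum_reward + (GAMMA ** k) * (next_obs @ w[i]) - obs @ w[i]
    w[i] += LR * td_error * obs                            # critic update
    probs = softmax(obs @ theta[i])
    grad_logp = -probs
    grad_logp[action] += 1.0                               # d log pi / d logits
    theta[i] += LR * td_error * np.outer(obs, grad_logp)   # actor update

# Asynchronous rollout: each agent re-decides only when its own action ends.
obs = rng.normal(size=(N_AGENTS, N_OBS))
pending = [None] * N_AGENTS   # per agent: [start_obs, action, cum_reward, steps_left, steps_done]
for t in range(100):
    for i in range(N_AGENTS):
        if pending[i] is None:
            a, dur = select_macro_action(i, obs[i])
            pending[i] = [obs[i].copy(), a, 0.0, dur, 0]
    next_obs = rng.normal(size=(N_AGENTS, N_OBS))          # stand-in for environment dynamics
    rewards = rng.normal(size=N_AGENTS)                    # stand-in for environment rewards
    for i in range(N_AGENTS):
        start_obs, a, cum_r, left, done = pending[i]
        cum_r += (GAMMA ** done) * rewards[i]
        left, done = left - 1, done + 1
        if left == 0:                                       # macro-action terminated
            async_update(i, start_obs, a, cum_r, done, next_obs[i])
            pending[i] = None
        else:
            pending[i] = [start_obs, a, cum_r, left, done]
    obs = next_obs

This sketch corresponds to a fully decentralized setting, where each agent's critic conditions only on its own observation; the centralized-learning and centralized-training-for-decentralized-execution paradigms named in the abstract would instead condition the critic on joint information, which is omitted here.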