Paper Title
Lyapunov-Based Reinforcement Learning for Decentralized Multi-Agent Control
Paper Authors
Paper Abstract
Decentralized multi-agent control has broad applications, ranging from multi-robot cooperation to distributed sensor networks. In decentralized multi-agent control, systems are complex, with unknown or highly uncertain dynamics, so traditional model-based control methods can hardly be applied. Compared with model-based control in control theory, deep reinforcement learning (DRL) is promising for learning controllers/policies from data without knowing the system dynamics. However, directly applying DRL to decentralized multi-agent control is challenging, as interactions among agents make the learning environment non-stationary. More importantly, existing multi-agent reinforcement learning (MARL) algorithms cannot ensure the closed-loop stability of a multi-agent system from a control-theoretic perspective, so the learned control policies are likely to generate abnormal or dangerous behaviors in real applications. Hence, without a stability guarantee, applying existing MARL algorithms to real multi-agent systems, e.g., UAVs, robots, and power systems, is of great concern. In this paper, we propose a new MARL algorithm for decentralized multi-agent control with a stability guarantee. The new algorithm, termed multi-agent soft actor-critic (MASAC), is developed under the well-known framework of "centralized training with decentralized execution". Closed-loop stability is guaranteed by introducing a stability constraint during the policy improvement step of MASAC. The stability constraint is designed based on Lyapunov's method in control theory. To demonstrate effectiveness, we present a multi-agent navigation example showing the efficiency of the proposed MASAC algorithm.
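The abstract's stability constraint rests on a Lyapunov decrease condition. As a minimal illustrative sketch (not the paper's exact formulation): the quadratic candidate L(s) = sᵀPs and the batch-averaged check below are assumptions chosen for simplicity; in the paper, the Lyapunov function and constraint are designed within the MASAC policy-improvement step.

```python
import numpy as np

def lyapunov_value(state, P):
    """Hypothetical quadratic Lyapunov candidate L(s) = s^T P s."""
    return state @ P @ state

def satisfies_decrease(states, next_states, P, alpha=0.1):
    """Check the sampled Lyapunov decrease condition
    E[L(s') - L(s)] <= -alpha * E[L(s)] over a batch of transitions,
    i.e. E[L(s') - (1 - alpha) * L(s)] <= 0."""
    deltas = [lyapunov_value(s2, P) - (1.0 - alpha) * lyapunov_value(s, P)
              for s, s2 in zip(states, next_states)]
    return float(np.mean(deltas)) <= 0.0

# Toy example: a contracting system s' = 0.5 * s shrinks the Lyapunov
# energy each step, so the decrease condition holds.
P = np.eye(2)
states = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]
next_states = [0.5 * s for s in states]
print(satisfies_decrease(states, next_states, P))  # True
```

In a constrained policy-improvement scheme, a check of this kind would be enforced (e.g., via a Lagrange multiplier) so that candidate policies driving the Lyapunov energy upward are penalized.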