Paper Title
Proximal Policy Optimization with Adaptive Threshold for Symmetric Relative Density Ratio
Paper Authors
Paper Abstract
Deep reinforcement learning (DRL) is a promising approach for introducing robots into complicated environments. The recent remarkable progress of DRL rests on policy regularization, which allows the policy to improve stably and efficiently. A popular method, the so-called proximal policy optimization (PPO), and its variants constrain the density ratio of the latest and baseline policies when it exceeds a given threshold. This threshold can be designed relatively intuitively, and in fact a recommended value range has been suggested. However, the density ratio is asymmetric about its center, and the possible error scale from that center, which should be close to the threshold, depends on how the baseline policy is given. To maximize the value of policy regularization, this paper proposes a new PPO variant derived from the relative Pearson (RPE) divergence, hence called PPO-RPE, which designs the threshold adaptively. In PPO-RPE, a relative density ratio, which can be formed symmetrically, replaces the raw density ratio. Thanks to this symmetry, the error scale from the center can easily be estimated; hence, the threshold can be adapted to the estimated error scale. Three simple benchmark simulations reveal the importance of algorithm-dependent threshold design. Simulations of four additional locomotion tasks verify that the proposed method statistically contributes to task accomplishment by appropriately restricting policy updates.
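For reference, the following is a minimal sketch, not the paper's exact formulation, of the quantities the abstract refers to: the raw density ratio and clipped surrogate of standard PPO, and the relative density ratio as commonly defined in the relative-Pearson-divergence literature. The mixture parameter \beta and the baseline policy notation \pi_{\mathrm{base}} are introduced here only for illustration; the symmetrized form and adaptive threshold actually used in PPO-RPE should be taken from the paper itself.

% Sketch under the assumptions stated above.
\begin{align}
  r_t(\theta) &= \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\mathrm{base}}(a_t \mid s_t)}
  && \text{(raw density ratio, unbounded above)} \\
  L^{\mathrm{CLIP}}(\theta) &= \mathbb{E}_t\!\left[ \min\!\big( r_t(\theta)\, A_t,\ \mathrm{clip}\big(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon\big)\, A_t \big) \right]
  && \text{(standard PPO clipped surrogate)} \\
  r_t^{\beta}(\theta) &= \frac{\pi_\theta(a_t \mid s_t)}{\beta\, \pi_\theta(a_t \mid s_t) + (1-\beta)\, \pi_{\mathrm{base}}(a_t \mid s_t)}
  && \text{(relative density ratio, bounded above by } 1/\beta\text{)}
\end{align}

Unlike the raw ratio, the relative density ratio in this standard definition is bounded above by 1/\beta, which illustrates why an error scale around its center is easier to characterize; PPO-RPE builds its symmetric formulation and adaptive threshold on a ratio of this kind, with details as specified in the paper.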