Paper Title
Quasi-Newton Iteration in Deterministic Policy Gradient
Paper Authors
Paper Abstract
This paper presents a model-free approximation of the Hessian of the performance of deterministic policies, for use in Reinforcement Learning via quasi-Newton steps in the policy parameters. We show that the approximate Hessian converges to the exact Hessian at the optimal policy and, provided that the policy parametrization is rich, allows for superlinear convergence in learning. The natural policy gradient method can be interpreted as a particular case of the proposed method. We verify the formulation analytically in a simple linear case and compare the convergence of the proposed method with the natural policy gradient on a nonlinear example.
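As a hedged illustration of the generic update form the abstract alludes to (a sketch, not the paper's exact derivation; the symbols $\hat{H}$ and $\alpha_k$ are placeholders rather than the paper's notation), a quasi-Newton step on the policy parameters $\theta$ can be written as

$$\theta_{k+1} = \theta_k - \alpha_k \,\hat{H}(\theta_k)^{-1}\, \nabla_\theta J(\theta_k),$$

where $J(\theta)$ is the performance of the deterministic policy $\pi_\theta$, $\nabla_\theta J$ is the deterministic policy gradient, $\alpha_k$ is a step size, and $\hat{H}(\theta_k)$ stands for a (here, model-free) approximation of the Hessian of $J$. If $-\hat{H}(\theta_k)$ is replaced by the Fisher information matrix $F(\theta_k)$, the update becomes $\theta_{k+1} = \theta_k + \alpha_k F(\theta_k)^{-1} \nabla_\theta J(\theta_k)$, i.e. the natural policy gradient step, which is one way to read the abstract's claim that the natural policy gradient is a particular case of the proposed method.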