Paper Title

Maximum-Likelihood Inverse Reinforcement Learning with Finite-Time Guarantees

Paper Authors

Siliang Zeng, Chenliang Li, Alfredo Garcia, Mingyi Hong

Paper Abstract

Inverse reinforcement learning (IRL) aims to recover the reward function and the associated optimal policy that best fits observed sequences of states and actions implemented by an expert. Many algorithms for IRL have an inherently nested structure: the inner loop finds the optimal policy given parametrized rewards, while the outer loop updates the estimates towards optimizing a measure of fit. For high-dimensional environments, such a nested-loop structure entails a significant computational burden. To reduce the computational burden of a nested loop, novel methods such as SQIL [1] and IQ-Learn [2] emphasize policy estimation at the expense of reward estimation accuracy. However, without accurately estimated rewards, it is not possible to do counterfactual analysis such as predicting the optimal policy under different environment dynamics and/or learning new tasks. In this paper we develop a novel single-loop algorithm for IRL that does not compromise reward estimation accuracy. In the proposed algorithm, each policy improvement step is followed by a stochastic gradient step for likelihood maximization. We show that the proposed algorithm provably converges to a stationary solution with a finite-time guarantee. If the reward is parameterized linearly, we show that the identified solution corresponds to the solution of the maximum entropy IRL problem. Finally, using robotics control problems in MuJoCo and their transfer settings, we show that the proposed algorithm achieves superior performance compared with other IRL and imitation learning benchmarks.
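
The abstract describes a single-loop alternation: one (entropy-regularized) policy improvement step followed by one stochastic gradient step on the expert log-likelihood. Below is a minimal sketch of that structure, not the authors' implementation; `reward_net`, `policy.soft_improvement_step`, `policy.sample_transitions`, and `expert_buffer.sample` are hypothetical placeholders, and the reward update uses the standard maximum-entropy IRL gradient (expert minus on-policy reward expectations), which is what the likelihood gradient reduces to for linearly parameterized rewards.

```python
# Hypothetical sketch of a single-loop ML-IRL update; interfaces are assumptions.
import torch

def mlirl_single_loop(reward_net, policy, env, expert_buffer,
                      num_iters=10_000, reward_lr=1e-3):
    opt = torch.optim.Adam(reward_net.parameters(), lr=reward_lr)
    for _ in range(num_iters):
        # (1) Policy step: improve the entropy-regularized policy slightly
        #     under the current reward estimate (e.g., one soft actor-critic
        #     style update) instead of solving the inner RL problem to
        #     optimality -- this is what removes the nested loop.
        policy.soft_improvement_step(env, reward_net)

        # (2) Reward step: stochastic gradient ascent on the expert
        #     log-likelihood. Minimizing (on-policy reward - expert reward)
        #     pushes the reward up on expert data and down on samples from
        #     the current policy.
        exp_s, exp_a = expert_buffer.sample()          # expert transitions
        pol_s, pol_a = policy.sample_transitions(env)  # on-policy transitions

        loss = reward_net(pol_s, pol_a).mean() - reward_net(exp_s, exp_a).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return reward_net, policy
```

Because both steps are single stochastic updates, each iteration has roughly the cost of one RL update plus one reward gradient, rather than a full inner RL solve per outer iteration.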
