Paper Title

Reward-Free RL is No Harder Than Reward-Aware RL in Linear Markov Decision Processes

Paper Authors

Andrew Wagenmaker, Yifang Chen, Max Simchowitz, Simon S. Du, Kevin Jamieson

Paper Abstract

Reward-free reinforcement learning (RL) considers the setting where the agent does not have access to a reward function during exploration, but must propose a near-optimal policy for an arbitrary reward function revealed only after exploring. In the tabular setting, it is well known that this is a more difficult problem than reward-aware (PAC) RL -- where the agent has access to the reward function during exploration -- with optimal sample complexities in the two settings differing by a factor of $|\mathcal{S}|$, the size of the state space. We show that this separation does not exist in the setting of linear MDPs. We first develop a computationally efficient algorithm for reward-free RL in a $d$-dimensional linear MDP with sample complexity scaling as $\widetilde{\mathcal{O}}(d^2 H^5/\epsilon^2)$. We then show a lower bound with matching dimension-dependence of $\Omega(d^2 H^2/\epsilon^2)$, which holds for the reward-aware RL setting. To our knowledge, our approach is the first computationally efficient algorithm to achieve optimal $d$ dependence in linear MDPs, even in the single-reward PAC setting. Our algorithm relies on a novel procedure which efficiently traverses a linear MDP, collecting samples in any given ``feature direction'', and enjoys a sample complexity scaling optimally in the (linear MDP equivalent of the) maximal state visitation probability. We show that this exploration procedure can also be applied to solve the problem of obtaining ``well-conditioned'' covariates in linear MDPs.
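For context, the $d$-dimensional linear MDP referenced in the abstract presumably follows the standard linear MDP model, in which both transitions and rewards are linear in a known $d$-dimensional feature map; the notation below is an illustrative sketch under that assumption, not taken from the paper itself:

$$
P_h(s' \mid s, a) = \langle \phi(s, a), \mu_h(s') \rangle, \qquad
r_h(s, a) = \langle \phi(s, a), \theta_h \rangle, \qquad \phi(s, a) \in \mathbb{R}^d,
$$

where the feature map $\phi$ is known to the agent while $\mu_h$ and $\theta_h$ are not. In reward-free RL, the agent first explores without observing any reward; once a reward function $r$ (linear in $\phi$ as above) is revealed, it must return a policy $\hat{\pi}$ satisfying

$$
V^{\star}_{r} - V^{\hat{\pi}}_{r} \le \epsilon
$$

with high probability. In these terms, the abstract's results say that a sample complexity of $\widetilde{\mathcal{O}}(d^2 H^5/\epsilon^2)$ suffices for this, while $\Omega(d^2 H^2/\epsilon^2)$ is necessary even when the reward is known during exploration, so the two settings share the same $d^2$ dependence.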
