Paper Title
Reaching Goals is Hard: Settling the Sample Complexity of the Stochastic Shortest Path
Paper Authors
Paper Abstract
We study the sample complexity of learning an $ε$-optimal policy in the Stochastic Shortest Path (SSP) problem. We first derive sample complexity bounds when the learner has access to a generative model. We show that there exists a worst-case SSP instance with $S$ states, $A$ actions, minimum cost $c_{\min}$, and maximum expected cost of the optimal policy over all states $B_{\star}$, where any algorithm requires at least $Ω(SAB_{\star}^3/(c_{\min}ε^2))$ samples to return an $ε$-optimal policy with high probability. Surprisingly, this implies that whenever $c_{\min}=0$ an SSP problem may not be learnable, thus revealing that learning in SSPs is strictly harder than in the finite-horizon and discounted settings. We complement this result with lower bounds when prior knowledge of the hitting time of the optimal policy is available and when we restrict optimality by competing against policies with bounded hitting time. Finally, we design an algorithm with matching upper bounds in these cases. This settles the sample complexity of learning $ε$-optimal policies in SSPs with a generative model. We also initiate the study of learning $ε$-optimal policies without access to a generative model (i.e., the so-called best-policy identification problem), and show that sample-efficient learning is impossible in general. On the other hand, efficient learning can be made possible if we assume the agent can directly reach the goal state from any state by paying a fixed cost. We then establish the first upper and lower bounds under this assumption. Finally, using similar analytic tools, we prove that horizon-free regret is impossible in SSPs under general costs, resolving an open problem in (Tarbouriech et al., 2021c).
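To make the dependence on $c_{\min}$ concrete, here is a minimal illustrative instantiation of the stated lower bound; the numbers ($S=10$, $A=5$, $B_{\star}=2$, $ε=0.1$) are assumptions chosen for illustration, not values from the paper:
$$
Ω\!\left(\frac{S A B_{\star}^{3}}{c_{\min}\,ε^{2}}\right)
= Ω\!\left(\frac{10 \cdot 5 \cdot 2^{3}}{c_{\min} \cdot (0.1)^{2}}\right)
= Ω\!\left(\frac{40\,000}{c_{\min}}\right),
$$
so the required number of samples scales inversely with $c_{\min}$ and diverges as $c_{\min} \to 0$, which is the sense in which SSP instances with zero minimum cost may fail to be learnable.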