Paper Title
Model-Free Non-Stationary RL: Near-Optimal Regret and Applications in Multi-Agent RL and Inventory Control
Paper Authors
Paper Abstract
We consider model-free reinforcement learning (RL) in non-stationary Markov decision processes. Both the reward functions and the state transition functions are allowed to vary arbitrarily over time as long as their cumulative variations do not exceed certain variation budgets. We propose Restarted Q-Learning with Upper Confidence Bounds (RestartQ-UCB), the first model-free algorithm for non-stationary RL, and show that it outperforms existing solutions in terms of dynamic regret. Specifically, RestartQ-UCB with Freedman-type bonus terms achieves a dynamic regret bound of $\widetilde{O}(S^{\frac{1}{3}} A^{\frac{1}{3}} \Delta^{\frac{1}{3}} H T^{\frac{2}{3}})$, where $S$ and $A$ are the numbers of states and actions, respectively, $\Delta>0$ is the variation budget, $H$ is the number of time steps per episode, and $T$ is the total number of time steps. We further present a parameter-free algorithm named Double-Restart Q-UCB that does not require prior knowledge of the variation budget. We show that our algorithms are \emph{nearly optimal} by establishing an information-theoretic lower bound of $\Omega(S^{\frac{1}{3}} A^{\frac{1}{3}} \Delta^{\frac{1}{3}} H^{\frac{2}{3}} T^{\frac{2}{3}})$, the first lower bound in non-stationary RL. Numerical experiments validate the advantages of RestartQ-UCB in terms of both cumulative rewards and computational efficiency. We demonstrate the power of our results in examples of multi-agent RL and inventory control across related products.
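The core mechanism the abstract describes, optimistic Q-learning with UCB exploration bonuses that is periodically restarted so stale estimates from earlier environment phases are discarded, can be sketched as follows. This is an illustrative simplification, not the paper's exact algorithm: it uses a Hoeffding-type bonus rather than the sharper Freedman-type bonus, the restart interval `restart_every` is left as a free hyperparameter (the paper tunes the epoch length from the variation budget $\Delta$), and the toy switching environment is invented purely for the demo.

```python
import numpy as np

def restartq_ucb(env_step, env_reset, S, A, H, K, restart_every, c=0.1, seed=0):
    """Sketch of restarted optimistic Q-learning (RestartQ-UCB-style).

    Q is optimistically initialized to H and updated with a decaying
    learning rate and a Hoeffding-type bonus; all estimates are wiped
    every `restart_every` episodes to handle non-stationarity.
    """
    rng = np.random.default_rng(seed)
    total = 0.0
    iota = np.log(2 * S * A * H * K)  # log factor inside the bonus
    for k in range(K):
        if k % restart_every == 0:
            # Restart: forget everything, back to optimistic initialization.
            Q = np.full((H + 1, S, A), float(H))
            Q[H] = 0.0
            N = np.zeros((H, S, A), dtype=int)
        s = env_reset(k)
        for h in range(H):
            a = int(np.argmax(Q[h, s]))            # greedy w.r.t. optimistic Q
            r, s2 = env_step(k, h, s, a, rng)
            total += r
            N[h, s, a] += 1
            t = N[h, s, a]
            alpha = (H + 1) / (H + t)              # standard decaying rate
            bonus = c * np.sqrt(H**3 * iota / t)   # Hoeffding-type bonus
            v_next = min(H, Q[h + 1, s2].max())    # clipped optimistic value
            Q[h, s, a] = (1 - alpha) * Q[h, s, a] + alpha * (r + v_next + bonus)
            s = s2
    return total

def make_switching_bandit_mdp(K):
    """Hypothetical toy non-stationary MDP: action 0 is rewarding during
    the first half of the run, action 1 during the second half."""
    def reset(k):
        return 0
    def step(k, h, s, a, rng):
        good = 0 if k < K // 2 else 1
        return (1.0 if a == good else 0.0), 0
    return reset, step

# Demo: small horizon, rewards switch halfway through the K episodes.
reset, step = make_switching_bandit_mdp(200)
total = restartq_ucb(step, reset, S=2, A=2, H=3, K=200, restart_every=25)
```

Without the restarts, Q-values learned during the first phase would keep the learner locked onto the stale action long after the switch; restarting trades a little re-exploration cost for the ability to track the drifting environment, which is exactly the trade-off the epoch length balances.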