Paper Title

Task-Agnostic Exploration via Policy Gradient of a Non-Parametric State Entropy Estimate

Paper Authors

Mirco Mutti, Lorenzo Pratissoli, Marcello Restelli

Paper Abstract

In a reward-free environment, what is a suitable intrinsic objective for an agent to pursue so that it can learn an optimal task-agnostic exploration policy? In this paper, we argue that the entropy of the state distribution induced by finite-horizon trajectories is a sensible target. In particular, we present a novel and practical policy-search algorithm, Maximum Entropy POLicy optimization (MEPOL), to learn a policy that maximizes a non-parametric, $k$-nearest-neighbors estimate of the state distribution entropy. In contrast to known methods, MEPOL is completely model-free, as it requires neither estimating the state distribution of any policy nor modeling the transition dynamics. Then, we empirically show that MEPOL allows learning a maximum-entropy exploration policy in high-dimensional, continuous-control domains, and how this policy facilitates learning a variety of meaningful reward-based tasks downstream.
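To make the objective concrete, here is a minimal sketch of a non-parametric $k$-nearest-neighbors entropy estimate of the kind the abstract refers to, written in the standard Kozachenko-Leonenko form. This is not the authors' implementation: the function name `knn_entropy`, the default $k = 4$, and the use of SciPy are illustrative assumptions. Note that the estimate needs only a batch of states sampled from trajectories, which is what makes the approach model-free: no density model of the state distribution and no transition model are ever fit. MEPOL then performs policy-gradient ascent on this quantity evaluated over states collected under the current policy.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln


def knn_entropy(states: np.ndarray, k: int = 4) -> float:
    """Kozachenko-Leonenko k-NN estimate of differential entropy.

    Illustrative sketch, not the paper's implementation.
    states: (N, d) array of states sampled from finite-horizon trajectories.
    Returns an estimate of the entropy of the induced state distribution.
    """
    n, d = states.shape
    # Distance from each point to its k-th nearest neighbor.
    # query() returns each point itself at index 0, hence k + 1 neighbors.
    tree = cKDTree(states)
    dists, _ = tree.query(states, k=k + 1)
    r_k = dists[:, -1]
    # Log-volume of the unit d-ball: log(pi^(d/2) / Gamma(d/2 + 1)).
    log_unit_ball = 0.5 * d * np.log(np.pi) - gammaln(0.5 * d + 1.0)
    # Small epsilon guards log(0) when duplicate states appear in the batch.
    return float(
        digamma(n) - digamma(k) + log_unit_ball
        + d * np.mean(np.log(r_k + 1e-12))
    )
```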
